Hitting Alt-F2, typing "email" followed by a contact name (e.g. Danielle), and pressing Enter presents me with a compose window addressed to that contact. No need to navigate to gmail.com or switch over to the screen running KMail (actually, usually Kontact).
It’s been a while since btrfs was first introduced to me via a Fedora version that had it as the default filesystem. At the time, it was especially brittle when it came to power outages; I ended up losing a system to one such outage. But a few years ago I started using btrfs on my home directory, and I even developed a program to manage snapshots. My two favorite btrfs features are copy-on-write (COW) snapshots, which only take up space once a snapshotted file changes, and the ability to dynamically set up and convert RAID levels. I was able to use the latter recently to get my photo hard drive onto RAID1 without needing an extra hard drive (most RAID solutions destroy whatever is already on the disks).
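The snapshot side of that workflow is simple to sketch. Assuming /home is a btrfs subvolume (the paths here are illustrative, not the ones my script actually uses):

```shell
# Take a read-only, copy-on-write snapshot of the home subvolume.
# It consumes almost no space until files diverge from the snapshot.
btrfs subvolume snapshot -r /home "/home/.snapshots/$(date +%Y-%m-%d)"

# List subvolumes (snapshots included) to confirm it exists
btrfs subvolume list /home
```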
However, btrfs has been plagued by some important issues – for example, RAID5/6 is unstable and not recommended, and after many years it still hasn’t solved the write hole (something the very similar ZFS solved years ago). Look online and you’ll find scores of tales of people who have suffered unrecoverable data loss with btrfs.
A few years ago Red Hat deprecated btrfs on RHEL6. That makes sense given the long support lifetimes of RHEL releases: the team at Red Hat has to backport kernel fixes, and that gets more complicated as time goes by – btrfs has grown by leaps and bounds since RHEL6. But a couple of days ago (as I write this – 10 days before this post appears), Red Hat announced it was being deprecated in RHEL7 as well. There was lots of speculation online, and someone who used to hack on btrfs for Red Hat mentioned that since he left, no one at Red Hat has worked on it. SUSE is the distro that employs btrfs hackers at this point. Then, yesterday, Stratis was announced; I first read about it in a Phoronix article.
First a quote from the announcement of Stratis:
Stratis is a new tool that meets the needs of Red Hat Enterprise Linux (RHEL) users calling for an easily configured, tightly integrated solution for storage that works within the existing Red Hat storage management stack. To achieve this, Stratis prioritizes a straightforward command-line experience, a rich API, and a fully automated, externally-opaque approach to storage management. It builds upon elements of the existing storage stack as much as possible, to enable delivery within 1-2 years. Specifically, Stratis initially plans to use device-mapper and the XFS filesystem. Extending or building on SSM 2.1.1 or LVM 2.1.2 was carefully considered. SSM did not meet the design requirements, but building upon LVM may be possible with some development effort.
From the Wikipage describing that it’s going to land in Fedora 28:
a local storage system akin to Btrfs, ZFS, and LVM. Its goal is to enable easier setup and management of disks and SSDs, as well as enabling the use of advanced storage features — such as thin provisioning, snapshots, integrity, and a cache tier — without requiring expert-level storage administration knowledge. Furthermore, Stratis includes monitoring and repair capabilities, and a programmatic API, for better integration with higher levels of system management software.
Then from the author of the Phoronix article:
For Stratis 1.0 they hope to support snapshot management, file-system maintenance, and more. Stratis 2.0 is where they plan to deal with RAID, write-through caching, quotas, etc. Stratis 3.0 is where it should get interesting: they hope for “rough ZFS feature parity” and support for send/receive, integrity checking, RAID scrubbing, compression, encryption, deduplication, and more. They only expect to reach Stratis 1.0 in the first half of 2018. No word on when they anticipate getting to Stratis 3.0 with ZFS feature parity.
Interesting. It led me down a path of exploring LVM and other tech. First of all, I don’t imagine btrfs is going to sit still, unworked on, while this happens. Maybe it finally reaches its stability goals. Maybe the threat of Stratis attracts more hackers to btrfs. Or maybe Stratis catches up with, and surpasses, btrfs. I think if they can make the dynamic RAID work and get stability up to ZFS levels, I could move over to Stratis. If not, I’m still thinking about LVM with XFS or ext4 for my home-built NAS rather than btrfs (or together with btrfs, for snapshotting, if that doesn’t get too complex), because that would potentially let me grow directories indefinitely as my backup needs grow. This will require more knowledge and planning, though. I’ll keep documenting my research here.
With modern technology, here’s the pattern I’ve noticed since college. New tech comes out and I can see that it’s neat, but not how I can make use of it. A few years later, I finally come across the right article and it all makes sense to me. I first noticed this with VMs. I couldn’t see a reason to want to use it outside of a server context. Then I used it to review Linux distros. Then I used it to run my network’s services. The same happened with tablets, smart phones, and Docker.
When everyone kept hyping up Docker I couldn’t figure out why it would be useful to me. It seemed overly complex compared to VMs. And if I wanted to run lots of isolated services, Linux Containers (LXC) seemed a lot easier and closer to what I was used to. In fact, in a Linux-on-Linux (Linux host, Linux guest) situation, containers seem superior to VMs in every way.
But Red Hat supports Docker. Maybe it’s because Ubuntu was championing LXC, and Canonical, like Google, seems to abandon things all the time (Unity being the latest casualty). I was also having issues with the version of LXC on CentOS 7 – like freezing up while running yum, or not running Apache. So I decided to explore Docker again.
Since the last time I came across Docker, I’ve gotten into Flatpak and AppImage, and suddenly Docker made sense again for someone outside of DevOps (where it always made sense to me). Using containers means I can run an app with a consistent set of libraries, independent of what’s on the system or being used by other apps. So I used Docker to run phpIPAM, and while it’s still a little more complicated than I’d like, it’s not too bad now that I have my head around the concept.
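For the curious, running phpIPAM under Docker boils down to something like the following. This is a hedged sketch, not my exact setup: the image name, port mapping, and environment variable are assumptions, and a real deployment also needs a MySQL/MariaDB container for it to talk to.

```shell
# Hypothetical invocation - check the image's own docs for the real
# variable names and whether a separate database container is required.
docker run -d --name phpipam \
  -p 8080:80 \
  -e TZ=America/New_York \
  phpipam/phpipam-www
```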
In a post about how security has changed, Josh Bressers had this great bit of info in how some people are living in the past when it comes to understanding technology:
If you listen to my podcast (which you should be doing already), I had a bit of a rant at the start this week about an assignment my son had over the weekend. He wasn’t supposed to use any “screens” which is part of a drug addiction lesson. I get where this lesson is going, but I’ve really been thinking about the bigger idea of expectations and reality. This assignment is a great example of someone failing to understand the world has changed around them.
What I mean is expecting anyone to go without a “screen” for a weekend doesn’t make sense. A substantial number of activities we do today rely on some sort of screen because we’ve replaced more inefficient ways of accomplishing tasks with these screens. Need to look something up? That’s a screen. What’s the weather? Screen. News? Screen. Reading a book? Screen!
You get the idea. We’ve replaced a large number of books or papers with a screen.
This was the status at the end of the scrub:
[root@supermario ~]# /usr/sbin/btrfs scrub start -Bd /media/Photos/
scrub device /dev/sdd1 (id 1) done
        scrub started at Tue Mar 21 17:18:13 2017 and finished after 05:49:29
        total bytes scrubbed: 2.31TiB with 0 errors
scrub device /dev/sda1 (id 2) done
        scrub started at Tue Mar 21 17:18:13 2017 and finished after 05:20:56
        total bytes scrubbed: 2.31TiB with 0 errors
I’m a bit perplexed by this information. Since this is RAID1, I would expect the scrub to be comparing data between the disks – is that not so? If it were, I would have expected both disks to finish at the same time. It’s also interesting that the 1TB/hr rate stopped holding at some point.
Here’s the output of the status command:
[root@supermario ~]# btrfs scrub status /media/Photos/
scrub status for 27cc1330-c4e3-404f-98f6-f23becec76b5
        scrub started at Tue Mar 21 17:18:13 2017, running for 01:05:38
        total bytes scrubbed: 1.00TiB with 0 errors
So on Fedora 25 with an AMD-8323 (8 cores, no hyperthreading) and 24GB of RAM, with this hard drive and its 3TB brother in RAID1, it takes about an hour per terabyte to do a scrub. (That seems about equal to what a coworker told me his system takes for a ZFS scrub – 40-ish hours for about 40-ish TB.)
A little under 3 years ago, I started exploring btrfs for its ability to help me limit data loss. Since then I’ve implemented a snapshot script to take advantage of the copy-on-write features of btrfs. But I hadn’t yet had the funds, or the PC case space, to do RAID1. I finally was able to implement it for my photography hard drive. This means that, together with regular scrubs, there should be only a minuscule chance of bit rot ruining any photos it hasn’t already corrupted.
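The “regular scrubs” part is easy to automate. A minimal sketch (the schedule and path here are just examples, not what I actually run):

```shell
# Run a full scrub of the photo volume at 3am on the 1st of each month.
# Without -B the scrub runs in the background rather than blocking.
echo '0 3 1 * * root /usr/sbin/btrfs scrub start /media/Photos' \
  > /etc/cron.d/btrfs-scrub-photos
```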
Here’s a documentation of some commands and how I got the drives into RAID1:
Before RAID:

# btrfs fi df -h /media/Photos
Data, single: total=2.31TiB, used=2.31TiB
System, DUP: total=8.00MiB, used=272.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, DUP: total=3.50GiB, used=2.68GiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=512.00MiB, used=0.00B

# btrfs fi usage /media/Photos
Overall:
    Device size:                   2.73TiB
    Device allocated:              2.32TiB
    Device unallocated:          423.48GiB
    Device missing:                  0.00B
    Used:                          2.31TiB
    Free (estimated):            425.29GiB      (min: 213.55GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 5.64MiB)

Data,single: Size:2.31TiB, Used:2.31TiB
   /dev/sdd1       2.31TiB

Metadata,single: Size:8.00MiB, Used:0.00B
   /dev/sdd1       8.00MiB

Metadata,DUP: Size:3.50GiB, Used:2.68GiB
   /dev/sdd1       7.00GiB

System,single: Size:4.00MiB, Used:0.00B
   /dev/sdd1       4.00MiB

System,DUP: Size:8.00MiB, Used:272.00KiB
   /dev/sdd1      16.00MiB

Unallocated:
   /dev/sdd1     423.48GiB

[root@supermario ~]# btrfs device add /dev/sda1 /media/Photos/
/dev/sda1 appears to contain an existing filesystem (btrfs).
ERROR: use the -f option to force overwrite of /dev/sda1
[root@supermario ~]# btrfs device add /dev/sda1 /media/Photos/ -f
[root@supermario ~]# btrfs fi usage /media/Photos
Overall:
    Device size:                   6.37TiB
    Device allocated:              2.32TiB
    Device unallocated:            4.05TiB
    Device missing:                  0.00B
    Used:                          2.31TiB
    Free (estimated):              4.05TiB      (min: 2.03TiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,single: Size:2.31TiB, Used:2.31TiB
   /dev/sdd1       2.31TiB

Metadata,single: Size:8.00MiB, Used:0.00B
   /dev/sdd1       8.00MiB

Metadata,DUP: Size:3.50GiB, Used:2.68GiB
   /dev/sdd1       7.00GiB

System,single: Size:4.00MiB, Used:0.00B
   /dev/sdd1       4.00MiB

System,DUP: Size:8.00MiB, Used:272.00KiB
   /dev/sdd1      16.00MiB

Unallocated:
   /dev/sda1       3.64TiB
   /dev/sdd1     423.48GiB

[root@supermario ~]# btrfs balance start -dconvert=raid1 -mconvert=raid1 /media/Photos/
Done, had to relocate 2374 out of 2374 chunks

Post-RAID:

[root@supermario ~]# btrfs fi usage /media/Photos
Overall:
    Device size:                   6.37TiB
    Device allocated:              4.63TiB
    Device unallocated:            1.73TiB
    Device missing:                  0.00B
    Used:                          4.62TiB
    Free (estimated):            891.01GiB      (min: 891.01GiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID1: Size:2.31TiB, Used:2.31TiB
   /dev/sda1       2.31TiB
   /dev/sdd1       2.31TiB

Metadata,RAID1: Size:7.00GiB, Used:2.56GiB
   /dev/sda1       7.00GiB
   /dev/sdd1       7.00GiB

System,RAID1: Size:64.00MiB, Used:368.00KiB
   /dev/sda1      64.00MiB
   /dev/sdd1      64.00MiB

Unallocated:
   /dev/sda1       1.32TiB
   /dev/sdd1     422.46GiB

[root@supermario ~]# btrfs fi df -h /media/Photos
Data, RAID1: total=2.31TiB, used=2.31TiB
System, RAID1: total=64.00MiB, used=368.00KiB
Metadata, RAID1: total=7.00GiB, used=2.56GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
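Stripped of all the diagnostic output above, the conversion itself is just two commands:

```shell
# Add the new disk to the existing filesystem (-f overwrites its old fs)
btrfs device add /dev/sda1 /media/Photos/ -f

# Rebalance, converting both data and metadata chunks to RAID1
btrfs balance start -dconvert=raid1 -mconvert=raid1 /media/Photos/
```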
And here’s the status of my first scrub to test out the commands:
[root@supermario ~]# btrfs scrub status /media/Photos/
scrub status for 27cc1330-c4e3-404f-98f6-f23becec76b5
        scrub started at Tue Mar 21 17:18:13 2017, running for 00:09:10
        total bytes scrubbed: 145.57GiB with 0 errors
I’ve both added and dropped some podcasts since last time around. Where I’m listing the same podcast as last year I may use the same description as in the past with slight (or no) variation.
Giant Beastcast – The East Coast Giant Bomb crew. This podcast is more about video game culture and news stories. It spends a lot less time on the “what you’ve been playing” section. I’ve actually grown to enjoy this one way more than the Bombcast because of the focus on the cultural and news aspects.
Radiolab – Heard about them because their stories are sometimes used on This American Life. Radiolab is a lot like TAL except with a much bigger focus on sound effects. It is, in a way, the descendant of the old radio shows of the 30s and 40s. (Approx 30-45 min)
Marketplace – This is a really good economics show. They talk about news that happened that day as well as stories that have been pre-prepared. This podcast has really helped me to understand the recession and why it happened as well as whether it is getting any better. (Approx 30 min long)
Codebreaker – A tech podcast. Season 1 asked the question “Is it Evil?” of various technologies.
On the Media – Although not always perfect and although it leans a little more left than moderate, On the Media is a good podcast about media issues. Examples include: truth in advertising, misleading news stories on the cable networks, debunking PR-speak from the White House, and other media literacy items. I tend to enjoy it nearly all the time and it’s a good balance to news on both sides of the spectrum, calling out CNN as often as Fox News. (Approx 1 hour long)
Fresh Air – Fresh Air is one of NPR’s most famous shows. It is similar in topic scope as Talk of the Nation, but without any listener call-in. Also, it tends to have a heavier focus on cultural topics (books, movies, etc). Terry Gross has been hosting Fresh Air for decades and is a master at interviewing her guests. Every once in a while there is a guest host or the interview is conducted by a specialist in that industry. (Approx 1 hour)
Freakonomics – Essentially an audio, episodic version of the eponymous book. If you enjoyed the insights of the book, you’ll really enjoy this podcast. (Approx 30 min)
The Infinite Monkey Cage – a BBC radio show about science. A panel of scientists (and one media star who is interested in science) talk about a topic. The only bummer is that the shows are quite infrequent – something like a run of 4 weekly episodes per quarter. (Approx 30 min)
Dan Carlin’s Hardcore History – if you’re a history buff you really need to be listening to this podcast. Dan’s well-researched podcast presents bits of history you never heard of, in ways you never thought of. He does a great job of making ancient societies relatable. The only bad thing is that there is a long gap between episodes due to the research involved. (Varies. Approx 1.5 – 4 hrs)
The Dollop – A very funny and very profane look at American history. The premise: The host tells a story of American history to the other guy, who doesn’t know ahead of time what the story’s about. It’s a premise that leads to some great reactions from the person not in the know (usually Gareth, but sometimes they do a Reverse Dollop). Also, listening to this podcast is a great reminder that the past is full of some really messed up people and situations.
WTF with Marc Maron – This is a pretty solid podcast which mostly consists of Marc Maron interviewing comedians. As with any interview-based show, the episodes are hit or miss, although more often than not they are really good. Occasionally he does a live show in which he’s still interviewing people, but with 4-6 guests per episode it’s much less in-depth. And, since it has an audience, the guest is performing more than being open. The only irritating thing is that Marc starts off each episode with a rant/listener email reading. Most of the time this is neither interesting nor funny. Clearly the reason people are tuning in is to hear the interviews – otherwise they’d take up a minority of the show instead of the bulk of it. So I wish he’d do his rant at the end of the episode so that those of us who just want to hear a great interview with a comedian we like can easily skip the monologue. (Approx 1.5 hours long)
Science Fiction Short Stories
There isn’t much to differentiate these two podcasts. They both feature great selections of short stories. I added them to my podcatcher to get a dose of fiction among the more non-fiction podcasts I usually listen to. Also, there’s something great about short-form fiction where you have to build the world AND tell the story in a very concise way. The main difference between the two podcasts is that Clarkesworld has pretty much just one narrator who’s quite incredible. Escape Pod tends to have a group of narrators. Most of them are great – every once in a while there’s a less than stellar one. Clarkesworld tends to end the story with the narrator’s interpretation and Escape Pod tends to end with reader comments from a few episodes ago. (varies. 15 min to 45 min)
How Did This Get Made – Paul Scheer, June Diane Raphael, and Jason Mantzoukas (plus the occasional guest) watch movies from the last few decades that will probably end up in the future’s version of Mystery Science Theater 3000. The movies are often incredibly baffling and full of strange plot points. One of the best parts of the show is “Second Opinions,” where Paul goes to Amazon.com to get 5-star ratings for the movie they just spent about an hour lambasting. Every other episode is a mini episode that previews the next show, has a section called “Corrections and Omissions,” and takes Q&As. The first two sections are great. The last one varies depending on the quality of the questions and answers. It can be pretty funny, but sometimes I just skip it. (Approx 1 hr)
The Bugle – John Oliver (from The Daily Show) and some other guy talk about the news. In a way, it’s like a How Did This Get Made for news. Also similar to The Daily Show in its incredulity at what people in the news are doing. (Approx 30 min)
Uh, Yeah Dude – tagline: “America through the eyes of two American Americans.” If you like My Brother, My Brother and Me, you’ll probably like this podcast’s style. They talk about both important news and cultural news and generally make fun of it. I call it smart-dumb commentary – like Seth Rogen movies, where the characters provide smart insight through dumb commentary. (Approx 1 hour)
Political Gabfest (from Slate) – This has taken the role that Talk of the Nation’s Wednesday slot left vacant when the show went off the air. They talk about politics (usually swinging heavily left or sometimes libertarian, while ToTN was more neutral) and I get a dose of what everyone’s talking about in politics. (Approximately 1 hour)
Common Sense with Dan Carlin – If you like the attention Dan puts towards Hardcore History, then you’ll probably love this take on the news. Usually Dan takes one (max 2) topics from the news, and by the time he’s done with it, I’ve seen 2-3 different points of view. Sometimes there’s a clearly right point of view (the sky is blue), but other times each side has valid points and neither one has the complete high ground. Dan is a complex creature, like many of us. On some topics he’s more likely to agree with Democrats, other times with Republicans, and sometimes neither. Other times he agrees with their Platonic ideal version, but not their realpolitik version. Either way, I’m always overjoyed when it shows up – which is somewhere between biweekly and monthly. (Approximately 45 minutes)
FiveThirtyEight Elections – a great, wonky podcast from the guys that brought you the most accurate election predictions. Has continued beyond the elections due to the odd circumstances of the new administration.
Sword and Laser – A fantasy and sci-fi book club. They interview up-and-coming authors and discuss the book club’s monthly book. Also cover news and upcoming new releases. (Varies. Approx 30 min)
Rocket Talk (Tor.com) – The host speaks with one or two Science Fiction and Fantasy authors about various things: their latest book, trends in the genres, publishing trends, etc. Sometimes a great show and sometimes I skip it halfway through. (Approximately 45 min)
Give Me Fiction – A pretty hilarious (to my sense of humor) super short story podcast. It’s recorded live (which often spices up comedy) and seems to skew Gen X/Millennial in its humor. (Varies, but usually under 15 minutes)
Talkin’ Toons with Rob Paulsen – The great voice actor behind two Ninja Turtles, Pinky, Yakko, and many, many other cartoon characters interviews other voice actors. It’s like WTF, but without the annoying self-reflection 10-15 minutes that I always skip on Maron’s podcast. If you enjoy voice acting nerdom or want a place to start, check this out. It’s recorded in front of an audience which is often great, but once in a while leads them on tangents that take away from their great anecdotes. (Approximately 1 hour)
Boars, Gore, and Swords: A Game of Thrones Podcast – two comedians (and sometimes some friends) discuss each episode of Game of Thrones and each chapter of the books. While it’s primarily funny, it does sometimes lead me to deeper insights into each episode.
The i Word: An Image Comics Podcast – different writers and artists working on a comic for Image Comics are interviewed about their comic as well as something unrelated to comics that they’re really into.
The Allusionist – a podcast about words, where they come from, and how we use them
You Are Not So Smart – the host, who wrote an eponymous book, tackles topics of self-delusion. Examples include placebos, alternative medicine, and conspiracy theories. (Approximately 45 min)
Probably Science – some comedians who used to work in the science and tech fields bring on other comedians (of various levels of scientific knowledge) to discuss pop science and where the articles might be misleading.
99% Invisible – Similar in scope to the NPR podcast Invisibilia, this one was there first. It explores the things that are in the background of life. Examples include architectural details we often miss or stories that tell how regions came to be. Production is similar in sonic greatness to RadioLab. (Approx 15 min)
Tell Me Something I Don’t Know – a game show from the guys behind Freakonomics. Learn some new facts in a fun and often funny way.
GoodMuslimBadMuslim – a window into what it’s like to be a Muslim in modern America.
Politically Reactive – W. Kamau Bell and Hari Kondabolu discuss politics with some jokes and some interviews with people mostly on the left, but sometimes on the right. They are respectful and always provide context to what’s being said.
More Perfect – Explores Supreme Court rulings and how they affect America.
Song Exploder – they pick a song and a member from that band explains how they put it together. They usually look at each layer of the track – vocals, drums, guitar, etc and talk about why each decision was made. Can range from interesting to revealing.
Continuing my LXC project, let’s list the installed containers:
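The listing command itself seems to have been lost from the post; presumably it was something like:

```shell
# List installed containers; --fancy adds state and IP columns
lxc-ls --fancy
```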
That just shows the name of the container – lemmy. For completeness’ sake, I’m going to start it as a daemon in the background rather than being dropped straight into its console:
lxc-start -n lemmy -d
As per usual Linux SOP, it produced no output. Now to jump in:
lxc-console -n lemmy
That told me I was connected to tty1, but did not present a login. Quitting out via Ctrl-a q let me get back to the VM’s tty, but trying again still did not get me a login. There’s some weird issue preventing it from working; however, this did:
lxc-attach -n lemmy
I’m not 100% sure why attach works and console doesn’t, but there seems to be discussion about systemd causing issues. At any rate, the only limitation of lxc-attach is that the user running it has to also exist in the container. Given that these are server boxes, root is fine, so it works.
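A nice side effect of lxc-attach is that you can run a single command inside the container without opening a shell at all, for example:

```shell
# Run one command inside the container and return to the host
lxc-attach -n lemmy -- yum -y update
```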
Unfortunately, networking does not work. That’ll be for next time.
I’m continuing on from yesterday’s post to get the VM ready to host LXC. I’m starting with CentOS 7, so the first thing I had to do was enable the EPEL repos:
yum install epel-release
Then, according to the guide I was following, I had to also install these packages:
yum install debootstrap perl libvirt
That installed a bunch of stuff. I get that the guide was trying to break out each step, but they probably could have installed both these and the LXC packages below in one go:
yum install lxc lxc-templates
Then start the services we just installed:
systemctl start lxc.service
systemctl start libvirtd
Then, a good thing to do to make sure everything’s working correctly is to run the following:
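The command appears to have dropped out of the post; judging from the “enabled” output described next, it was presumably:

```shell
# Verify the kernel has the namespace/cgroup features LXC needs
lxc-checkconfig
```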
If everything comes back “enabled” (in CentOS 7 it’s also green), then you’re in good shape. You can see which templates you have installed with the following command:
ls -alh /usr/share/lxc/templates/
When I did that, I had alpine, altlinux, busybox, centos, cirros, debian, fedora, gentoo, openmandriva, opensuse, oracle, ubuntu, and ubuntu-cloud.
As my last act of this post, I’ll create my first container:
lxc-create -n lemmy -t centos
This container is going to run Cockpit to keep an eye on the servers on my network. After running that command, it looked like a yum/dnf install was happening. Then it did some more stuff and told me what the root password was, along with how to change it without having to start the container. So I did that. Next time: starting and running a container.
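For reference, the trick the template suggests for changing the root password without booting the container is typically a chroot into its rootfs (the path shown is the LXC default, and an assumption on my part):

```shell
# Set the container's root password from the host
chroot /var/lib/lxc/lemmy/rootfs passwd
```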
As I mentioned before, I’m looking at Linux Containers (LXC) for higher-density virtualization. To get ready for that, I had to create a network bridge to allow the containers to be accessible on the network.
First I installed bridge-utils:
yum install bridge-utils -y
After that, I had to create the network script:
In there I placed:
DEVICE="virbr0"
BOOTPROTO="static"
IPADDR="192.168.1.35" # IP address of the VM
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
DNS1="192.168.1.7"
ONBOOT="yes"
TYPE="Bridge"
Then, since my ethernet device on this machine is eth0, I updated its config accordingly, and after a
systemctl restart network
it was supposedly working. I was able to ping www.google.com. We’ll see what happens when I start installing LXC Containers.
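The corresponding edit to eth0’s config seems to have been lost from the post. On CentOS 7 the usual change is to strip the IP settings from the NIC and enslave it to the bridge, roughly like this (a sketch, not my exact file):

```shell
# Rewrite /etc/sysconfig/network-scripts/ifcfg-eth0 to hand its
# addressing over to the virbr0 bridge defined above
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE="eth0"
ONBOOT="yes"
TYPE="Ethernet"
BRIDGE="virbr0"
EOF
```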
I updated Rawhide and ended up with this login screen. I like it – I think mostly because of the font.
Back when I first was working on replacing my Pogoplug (the original BabyLuigi), I was looking at potentially using it to learn about Docker, in addition to creating virtual machines that were actually useful instead of just playing around with VMs for looking at Linux distros. The benefit of Docker was to have the isolation of VMs without the overhead of VMs. Also, since it was trending pretty hard, I figured some experience with it would be good for my career. So I spent a few weeks researching Docker and playing around with some of the online demos. I read lots about how it was used and how to avoid the usual pitfalls. But in the end I went with a VM that did a bit more than I wanted; I’d wanted to separate services so that updating one thing wouldn’t cause me to lose everything. The more I looked into Docker, though, the more it looked like an unnecessary headache without enough benefit. Docker containers were SO isolated that if you wanted to run a LAMP stack you had to run at least three containers, find a way to string them together, and give them a separate pool of storage they could all access.
Recently I’ve been hanging around in the Home Lab subreddit. There are a lot of people in there like me who enjoy learning about computing and using it to make things easier (if a bit more complex) at home. It pairs well with the Self Hosted subreddit, another thing that is important to me because of how many services have been changed or dropped (see Google Reader) out from under me. I prefer to be in charge of things on my own. In that subreddit I heard about LXC – Linux Containers. I was intrigued, and after I came across it again I did some research. LXC is what I wanted Docker to be (which makes sense, since Docker originally built on LXC). It allows you to run what is essentially a VM without the overhead of simulating hardware. I’ve seen some webpages claim you can fit 41 LXC containers in the space where you’d be able to fit about 2 VMs. I haven’t seen anything that high in the Home Lab subreddit, but I HAVE seen some pretty impressive densities. Since I don’t have the money for a computer that could hold the number of VMs I’d love to run, I’m going to be exploring LXC containers. I’ll blog about my progress so you can learn along with me. I’m pretty excited about learning this new tech.
I created this video to help people learn how easy it is with Libvirt, KVM, and QEMU to have multiple monitors in your virtual machines.
Going to do some summer cleaning on my VMs, so I wanted to document peak KVM as a reminder of how many I had running at this time: