Another piece falls into place for Docker

Yesterday I was at a conference dedicated to DevOps, and Red Hat and Google were there to talk about containers, especially Docker and Kubernetes. While summarizing it to some of my employees today, I was asked what I see as the benefits of Docker containers relative to virtual machines. I mentioned that one of the great things is that Docker containers can be treated as immutable and disposable: your persistent data isn't written inside the container itself, but to a volume or host folder that's mounted into the container.
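
To make that concrete, here's a quick throwaway demonstration (the host path and image are just placeholders): write a file to a mounted host folder, remove the container, and the file is still there on the host.

```
# Hypothetical example: the host path and image are placeholders.
mkdir -p /tmp/docker-data

# -v mounts a host folder into the container, so anything written
# to /data inside the container actually lands on the host.
docker run --rm -v /tmp/docker-data:/data alpine \
    sh -c 'echo "hello from inside the container" > /data/test.txt'

# The container is already gone (--rm), but the file survives:
cat /tmp/docker-data/test.txt
```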

Then today, while I was walking through the neighborhood with Stella, I was thinking about that, and suddenly a reddit discussion I'd had with someone on /r/DataHoarder popped into my head. This person had a bunch of OS backups with his data intermingled. I told him he was doing it wrong: his data should live on a dedicated data drive or on a NAS, separate from the OS, so that he doesn't need OS backups at all, only backups of settings and his personal files. On my Linux computer I do this by keeping /home on a separate drive. On my Windows computer I've mapped "My Documents" to a separate drive.
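
On the Linux side, that separation can be as simple as one line in /etc/fstab. Here's a hypothetical entry (the device name and filesystem are assumptions; yours will differ):

```
# Hypothetical /etc/fstab entry: a second disk or partition mounted
# at /home, keeping personal files off the OS drive.
/dev/sdb1   /home   ext4   defaults   0   2
```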

And the lightbulb went off. Right now I'm backing up entire VMs, which takes up gigabytes of space and lots of time. With Docker containers, I'd only need to worry about backing up the data stores. The containers themselves don't matter, because I can always pull the images again from the repos.
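
The usual trick for backing up a data volume is a throwaway container that tars it up. Here's a rough sketch, where the volume name and backup path are made up for illustration:

```
# Hypothetical names: app-data is a named volume, /srv/backups is
# a folder on the host where the archive should land.
docker run --rm \
    -v app-data:/data:ro \
    -v /srv/backups:/backup \
    alpine tar czf /backup/app-data.tar.gz -C /data .

# Restoring is the same idea in reverse:
docker run --rm \
    -v app-data:/data \
    -v /srv/backups:/backup \
    alpine tar xzf /backup/app-data.tar.gz -C /data
```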

So it looks like it's time for me to learn OpenShift (since I'm all-in on Red Hat, I may as well learn their distribution of Kubernetes) so I can better orchestrate all of this as I move beyond just a couple of containers. Plus it'll be fun to learn!

Using Docker Now!

Here's a pattern I've noticed with new technology ever since college: something new comes out and I can see that it's neat, but not how I can make use of it. A few years later, I finally come across the right article and it all makes sense to me. I first noticed this with VMs: I couldn't see a reason to use them outside of a server context. Then I used them to review Linux distros, and later to run my network's services. The same thing happened with tablets, smartphones, and Docker.

When everyone kept hyping up Docker, I couldn't figure out why it would be useful to me. It seemed overly complex compared to VMs. And if I wanted to have lots of isolated services running, Linux Containers (LXC) seemed a lot easier and closer to what I was used to. In fact, in a Linux-on-Linux situation (a Linux host running Linux guests), containers seem superior to VMs in every way.

But Red Hat supports Docker. Maybe that's because Ubuntu was championing LXC, and they seem to abandon things all the time, much like Google (Unity being the latest casualty). I was also having issues with the version of LXC on CentOS 7, like containers freezing up while running yum, or refusing to run Apache. So I decided to explore Docker again.

Since the last time I came across Docker, I've gotten into Flatpak and AppImage, and suddenly Docker made sense again for someone outside of DevOps (where it had always made sense to me). Using containers means I can run an app with a consistent set of libraries, independent of what's on the system or being used by other apps. So I used Docker to run phpIPAM, and while it's still a little more complicated than I'd like, it's not too bad now that I have my head around the concept.
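
For what it's worth, here's a rough sketch of the pattern (the image names, environment variables, and passwords are assumptions; check the docs of whichever phpIPAM image you actually use): the database keeps its data on a named volume, and the web container is pointed at it.

```
# Hypothetical sketch: image names, variables, and passwords are
# placeholders; consult the documentation of the actual images.

# Database container, with its data on a named volume.
docker run -d --name phpipam-db \
    -v phpipam-db-data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=changeme \
    mariadb

# phpIPAM web container, pointed at the database.
docker run -d --name phpipam-web \
    --link phpipam-db:db \
    -e IPAM_DATABASE_HOST=db \
    -e IPAM_DATABASE_PASS=changeme \
    -p 8080:80 \
    phpipam/phpipam-www
```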

Of course, because things always change right when I join them, apparently Docker is Moby now? Here are some less cynical takes: from Docker themselves and from InfoWorld.