Using Docker Now!


With modern technology, here’s the pattern I’ve noticed since college. New tech comes out and I can see that it’s neat, but not how I can make use of it. A few years later, I finally come across the right article and it all makes sense to me. I first noticed this with VMs. I couldn’t see a reason to use them outside of a server context. Then I used them to review Linux distros. Then I used them to run my network’s services. The same happened with tablets, smartphones, and Docker.

When everyone kept hyping up Docker I couldn’t figure out why it’d be useful to me. It seemed overly complex compared to VMs. And if I wanted to have lots of isolated services running, Linux Containers (LXC) seemed a lot easier and closer to what I was used to. In fact, in a Linux-on-Linux situation (a Linux guest on a Linux host), containers seem superior to VMs in every way.

But Red Hat supports Docker. Maybe that’s because Ubuntu was championing LXC, and Ubuntu seems to abandon stuff all the time, like Google does (Unity being the latest casualty). And the version of LXC on CentOS 7 was giving me trouble – freezing up while running yum, or refusing to run Apache. So I decided to explore Docker again.

Since the last time I came across Docker, I got into Flatpak and AppImage, and suddenly Docker made sense again for someone outside of DevOps (where it always made sense to me). Using containers means I can run an app with a consistent set of libraries, independent of what’s on the system or being used by other apps. So I used Docker to run phpIPAM, and while it’s still a little more complicated than I’d like, it’s not too bad now that I have my head around the concept.
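
For the curious, here’s roughly what that looks like. This is a minimal sketch rather than exactly what I ran – the phpIPAM image name and its environment variables (phpipam/phpipam-www, IPAM_DATABASE_HOST, IPAM_DATABASE_PASS) are the ones the project publishes as far as I remember, so check the phpIPAM docs for the current details before copying this:

    # Database container for phpIPAM, using the official MariaDB image
    docker run -d --name phpipam-db \
      -e MYSQL_ROOT_PASSWORD=changeme \
      -v phpipam-db-data:/var/lib/mysql \
      mariadb:10

    # phpIPAM web front end, pointed at the database container
    docker run -d --name phpipam-web \
      --link phpipam-db:db \
      -e IPAM_DATABASE_HOST=db \
      -e IPAM_DATABASE_PASS=changeme \
      -p 80:80 \
      phpipam/phpipam-www

The nice part is that nothing in those commands cares what version of PHP or Apache CentOS ships – the container brings its own, which is exactly the consistent-libraries point above.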

Of course, because things always change when I join them, apparently Docker is Moby now? Here are some less cynical takes: one from Docker themselves and one from InfoWorld.