The podman saga continues. The podman equivalent of a docker-compose.yml can be created from a pod with the following command:
# podman generate kube (name of pod) > (filename).yaml
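For reference, the file it emits looks roughly like a stripped-down Kubernetes pod manifest. This is a hand-written sketch rather than my actual output, with the names and mounts borrowed from my phpIPAM setup:

```yaml
# Sketch of what `podman generate kube` emits (abbreviated, illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: phpIPAM
spec:
  containers:
  - name: mysql
    image: docker.io/library/mysql:5.6
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: my-secret-pw
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: phpipam-mysql-volume
  volumes:
  - name: phpipam-mysql-volume
    hostPath:
      path: /root/phpipam-podman
      type: Directory
```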
So I did that with the pod I’d created with an SELinux context. Now it was time to try it on another Fedora 31 VM to see if it would work. To be on the safe side, I started off by creating the phpIPAM folder, chowning it to nobody, and chmodding it to 777.
Then I ran:
# podman play kube phpIPAM-withSELinux.yaml
That triggered it to grab the images from Docker Hub. As per usual, the CPU spiked like crazy as it did Podman things. I’m not sure if this is due to the VM, Podman not having a daemon, or something else, but it’s something to note. At the end it printed out:
Interestingly, it doesn’t appear that I had to punch a hole through the firewall this time. Perhaps that was just a consequence of me not knowing exactly what was happening on my first attempts with Podman.
Unfortunately, the SELinux :Z attribute doesn’t appear to have come over. That makes sense: when I did a diff against the previous YAML I’d created, I didn’t see anything about it. First, let me try the setsebool command.
# setsebool -P container_manage_cgroup true
So now I want to try one of the other commands I found while trying to figure out the SELinux issue.
# chcon -Rt svirt_sandbox_file_t phpipam/
Then I restarted the container. That does not appear to be enough to get it working. The SELinux alert page had some solutions it wanted me to type, so I tried those.
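The SELinux alert browser typically proposes generating a local policy module from the logged denials with audit2allow. What it suggested looked something like this (the module name "my-mysqld" is just whatever you pass to -M, so treat the exact invocation as a sketch):

```shell
# Build a local SELinux policy module from the mysqld denials in the
# audit log, then install it (requires policycoreutils-python-utils)
ausearch -c 'mysqld' --raw | audit2allow -M my-mysqld
semodule -i my-mysqld.pp
```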
Last time I messed around with Podman, I finally got things working and had what I think was a pretty good understanding of how to go forward. But in order to get things working, I’d had to turn off SELinux. Now it was time to see what I had to do to make Podman work with SELinux. I’ve got some ideas based on some Googling and might also need to try a program called udica to create the right contexts.
First of all, when I rebooted the VM, I noticed that the pod was stopped. So eventually I’ll need to figure out how to use systemd to bring it up on boot. I noted that SELinux was on after the reboot. I wanted to first see if maybe setting things up with SELinux off and then turning it on would lead to a working situation. (Also, I was learning a lot when setting things up before; maybe I never needed to turn it off.) I didn’t see any SELinux complaints. So I tried to load the page. SELinux was blocking MySQL from writing to the directory (and, apparently, reading it), and so the site loaded up brand new, as if I’d never configured the database.
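As a note for future-me on the start-at-boot problem: newer Podman versions can generate systemd unit files for a pod. I haven’t tried this yet, so consider it a sketch (pod name assumed to be phpIPAM):

```shell
# Have podman write systemd units for the pod and its containers,
# then install and enable them so the pod comes up at boot
podman generate systemd --name phpIPAM --files
cp pod-phpIPAM.service container-*.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable pod-phpIPAM.service
```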
I stopped the pod again. Then I tried this command first:
# setsebool -P container_manage_cgroup true
The computer did its thing. I started the pod again. The same issue occurred. Both the documentation I’d consulted and someone on reddit had mentioned using the “:Z” option on the mount to get SELinux to be OK with it. As far as I know, I can’t change it on the container that’s already a part of the pod. Instead, I need to remove the container and create a new one from the image with the :Z option on the mount. So I tried that. After removing:
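The remove-and-recreate, sketched out (I don’t have the exact auto-generated container name handy, so that’s a placeholder; the other flags mirror my earlier commands):

```shell
# Remove the old MySQL container, then recreate it with :Z so podman
# relabels /root/phpipam-podman with a private container SELinux label
podman rm -f <mysql-container-name>
podman run -dt --pod phpIPAM -e MYSQL_ROOT_PASSWORD=my-secret-pw \
    -v /root/phpipam-podman:/var/lib/mysql:Z mysql:5.6
```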
The fourth game we worked on was another game that I spent a lot of my childhood playing. We made a Galaxian/Galaga clone:
My mom’s youngest brother had a Nintendo and lots of arcade ports. When I was young he lived in the condo above my grandmother’s condo and whenever we’d go visit her, I’d ask if I could visit him so we could play games. The game I loved playing the most there was Galaga because of the frantic pace.
As I did last time, I documented the concepts I learned on my github page, but I think the one that will probably get the most use in any games I make going forward is the coroutine. It’s a way of writing a function so that it will do some stuff and then wait until something else happens. Usually we used it to wait a few seconds to add a pause to that particular method.
Below is a video of me playing the version we coded in the class. I hadn’t yet figured out how to get things to look right with the resolution since the instructor had us do a tall arcade-style screen. Computers expect to play horizontal games so it didn’t want to cooperate at first.
I haven’t added any new features as I wanted to get caught up on some other tasks that I’d let languish while going through this module, but I do have some plans to implement high scores.
A big difference from 9 years ago is that I don’t use the dedicated ereader as much as I used to. Mostly that comes down to the fact that I don’t read as much before bed and I have limited other places to use the ereader. Usually I’m either reading on my phone or on the computer. But there’s one time that I really love the ereader – when I’m traveling, particularly by plane. This way I can read during the entire trip without draining my cell phone battery. As Scarlett has gotten old enough to read, I figured she could have the Nook (to keep from straining her eyes constantly with the backlit tablet) and I’d still want an ereader for travel. Additionally, who knows – I might go back to more reading at home or before bed when a backlit phone just isn’t ideal. So I got the Kobo Clara HD.
What I think is interesting is that the experience of getting one of these set up is mostly the same experience as 9 years ago, except no wall wart was included for the USB charger. So you’re either stuck with using a computer, the wall wart of one of your cell phones, or a wall outlet with a USB port.
My Nook was so old that it only had the bottom interface as a touch interface. To do anything like select a footnote, you had to VERY SLOWLY scroll across the screen to the number and then wait for it to load up the new page. Having a touch screen reader is so much more convenient. Additionally, the book selection screen, shown above, is much more informative and useful. At the time that the Nook came out, I thought people were being fussy and making excuses for paper books by complaining about the page turning speed. But, as something that doesn’t happen that often (I’m a fast reader and I wasn’t blowing through pages), I thought it was fine. It did get incredibly annoying if I was reading a non-fiction book with footnotes (or a fiction book using them humorously) because then there was a lot of page reloading. The Kobo turns so fast, at this point it would take me longer to turn a page in a paper book.
The Kobo is so tiny, I can truly fit it in my pocket (and did just that on my last plane trip) – here are some size comparison photos:
Even though the screens are about the same size, I got the highest DPI Kobo I could, and so the text is so much crisper and less straining on my eyes.
The fact that it has a light that sits under the side panels (so it’s front-lit like a book light) is also much better for my eyes, especially when reading at night. It’s also adjustable, which is great.
That said, I will miss the Nook’s sleep mode page with its famous authors. The Kobo shows the cover of the book you’re reading:
So far the only complaint I have is that the Kobo Clara HD is very picky about which USB port it’ll use to sync on my computer. I had to try a bunch of them until I found one that the Kobo was fine with. Until the most recent update, one could complain that side-loaded books (the vast majority of my collection) were second class citizens unless one used the Kepub plugin for Calibre. But now they seem to function the same way as the books bought through Kobo’s own shop.
I’m pretty happy that I got it, and I hope it can get me through the next nine years of reading ebooks.
When I switched to Twenty Nineteen almost a year ago, I wasn’t sure if I was going to stick with the theme so I never did my theme documentation blog post. So here’s what the previous theme, Twenty Sixteen, which I had for about 2.5 years, looked like:
Over the years I’ve taken many, many photos of my kids at Coney Island. Lots of them have come out great. But I think this batch is among the best I’ve taken of the kids there thus far.
In some cases, it’s the expressions on the twins’ faces.
Other times, I succeeded in getting the perfect action shot.
I don’t have much else to say other than the fact that I’m glad the kids are enjoying the simplicity of Coney Island’s rides while they’re still young enough to do so.
Adding --name mysql wasn’t enough to get the phpIPAM Apache container to find the MySQL container. They’re in the same pod, but something’s not quite right. So I decided to see if I could modify config.php by mounting the container and editing it directly.
# podman mount beautiful_gauss
While this allowed me to see the config files and open them in vi from the host (vi isn’t included in the container), I could not modify the contents. I think the key is passing “phpIPAM5” (or whatever the pod is called) into the MYSQL_ENV_MYSQL_HOST environment variable. So let’s try that. First, I had to stop phpIPAM5. I’ve been using
# podman pod rm phpIPAM4 -f
The force allows it to remove the pods. So I’m going to go back to just phpIPAM for the pod name without any number suffixes. I continue to note that, for some reason, Podman tends to cause huge spikes in CPU usage when doing stuff around pods, particularly creation and destruction. Once the pod’s running, I don’t see any huge CPU issues. But starting and stopping definitely takes a lot longer than Docker containers do.
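The recreated pod looked roughly like this; I’m reconstructing it from my earlier commands plus the new environment variable, so take the exact flags with a grain of salt. (Since containers in a pod share a network namespace, 127.0.0.1 should also work as the host value.)

```shell
# Recreate the pod without a number suffix, and point phpIPAM at
# MySQL explicitly via MYSQL_ENV_MYSQL_HOST
podman pod create --name phpIPAM -p 8081:80
podman run -dt --pod phpIPAM -e MYSQL_ENV_MYSQL_ROOT_PASSWORD=my-secret-pw \
    -e MYSQL_ENV_MYSQL_HOST=127.0.0.1 pierrecdn/phpipam
podman run -dt --pod phpIPAM -e MYSQL_ROOT_PASSWORD=my-secret-pw \
    -v /root/phpipam-podman:/var/lib/mysql mysql:5.6
```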
Unfortunately, that seems to lead to a pod that won’t answer on 8081. Going back to my previous pod, that wouldn’t load either. But the WEIRD thing is that even though I had SELinux turned off, I kept getting logs like this:
SELinux is preventing mysqld from unlink access on the file phpIPAM5.lower-test.
And that’s incredibly weird. Also, for some reason the owner of my phpipam-podman folder had become systemd-coredump instead of root. ALSO VERY WEIRD. In dmesg I see something that may explain the problem:
overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
So maybe I need to point it at a different directory. Let’s make one:
# mkdir phpipam-podman6
And let’s try again! So…new directory AND environment variable. MAYBE we’ll get something that works…
Nope. I think this is more frustrating than if it had never worked at all, precisely because it did work once (even if the missing environment variables kept the install from finishing). Ugh. I give up for now. Maybe Podman isn’t the Docker replacement it’s supposed to be?
Like these random container names that podman generated:
# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b83a26bb2c5d docker.io/library/mysql:5.6 mysqld 2 minutes ago Up 2 minutes ago 0.0.0.0:8081->80/tcp hungry_wilson
f35ec64d3b3c docker.io/pierrecdn/phpipam:latest apache2-foregroun… 2 minutes ago Up 2 minutes ago 0.0.0.0:8081->80/tcp nice_johnson
Makes me think perhaps there should be a list of adjectives and names that shouldn’t go together?
I’ve been hearing about Podman for a while now – at Red Hat Summit and at various local Red Hat presentations. I’ve seen the slides where the RHEL presenter (it’s always the same guy, but I’m terrible with names – after a bit of research, I think it’s Dan Walsh) asks you to pledge to call them container images, not Docker images, etc. But up until now, even though I’m a huge Red Hat fan, I’ve continued to use Docker as my container engine because I am just running a few containers for myself. I don’t even use a one-machine Docker Swarm. I use docker-compose. And that’s just not something that Podman is ever going to officially support. This makes sense because Red Hat is thinking enterprise. And in the enterprise there are two scenarios: 1) orchestration – vanilla Kubernetes, OpenShift, etc – and 2) devs running docker run (or podman run) to test the images before putting them into the orchestrator. I’m an anti-pattern, even if I’m not the only one doing things this way.
Recently I’ve been thinking of converting over to Podman from Docker. There are a few reasons. First of all, Docker requires running a daemon. Not only does that use more resources, but it provides a target for exploitation. In fact, there’s currently a crypto-miner worm working its way through vulnerable Docker servers. Also, the daemon runs as root, and that makes it dangerous if there’s an escape. Podman doesn’t have a daemon, and it’s built to be run by regular users. Again, remember that the use case for Podman that Red Hat is targeting is the dev on their own computer or laptop who’s testing something that will eventually be put onto an orchestrator. So they want devs to be able to run Podman as a regular user.
But there’s another reason, and it’s something so subtle that it escaped me until this week. If you look at all of Red Hat’s material convincing folks to use Podman, it’s all about how it’s a drop-in replacement for Docker. But it has recently dawned on me, as I’ve read a bit more about Podman and reflected on various talks I’ve heard about it, that there’s something hidden right there in the name. Why is it “pod” man and not “container” man? Well, besides containerman being a much longer command to type, it’s because while on the surface Podman is Red Hat’s Docker replacement, under the hood it’s related to Kubernetes. In Kubernetes (from now on referred to as k8s), the smallest unit of management is the pod (which can have 1 to many containers). So when you run Podman as a drop-in for Docker (Red Hat even mentions using alias to help with muscle memory), it’s just creating 1-container pods. But you could actually use Podman to create multi-container pods. And, in the same way that docker-compose.yml is used both for Docker Compose and Docker Swarm, the YAML that you get from Podman can be used with Kubernetes distros.
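The drop-in claim really is that literal; the muscle-memory alias Red Hat suggests amounts to:

```shell
# Every docker invocation transparently becomes a podman one
alias docker=podman

docker ps              # actually runs: podman ps
docker pull mysql:5.6  # actually runs: podman pull mysql:5.6
```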
So, then comes the reason for the question that is the title of this blog post. If I’m going to transition from Docker to Podman, I’m pretty sure it’s not going to happen perfectly without any issues. After all, the Podman as drop-in replacement for Docker works in the simplest of use-cases. But some of the containers I’m using – like Calibre-web – may be making use of Docker-isms rather than standard OCI container features. So I’d like to be able to do a phased transition so that I don’t have to either take an entire day off from work or spend a weekend trying to get everything working. A weekend in which the family is upset that things aren’t working because my homelab runs the house’s tech.
To test this, I fired up a Fedora 30 server edition VM and an isolated network where I would install Docker and make sure that was working and then try and install Podman at the same time and see if that would work. Why Fedora 30 when 31 is already out? Because another reason I’ve become interested in Podman is because Fedora 31 moves on to cgroupsv2 and Docker doesn’t support that yet. There’s a command that can be used to turn it off, but if I have any issues, I’d rather not have it be because of an extra variable. So Fedora 30 it is.
I went to Docker, downloaded their repo and installed via instructions here. And containers were up and running.
And I went through the config scripts. Here we are:
OK, now that I know I have a working Docker VM, it’s time to set up Podman. I’ve found one easy way to set it up AND have it working in Cockpit is to install the package cockpit-podman. This leads to the following Cockpit screen for Podman:
This is as far as I’d gotten on my production system. I was afraid that turning on the service would wreck Docker. ALSO, what service? I thought Podman ran without a daemon? As this page explains, it’s basically using systemd to do your container management. So I clicked on Start Podman.
I clicked around in my Docker PHP admin website and it still worked. So running Podman didn’t kill Docker. Huzzah! That’s great to know if you’re doing migrations. You can see that the images aren’t there. This is because Podman stores images in a different directory than Docker does. I’m going to try and create a pod with phpIPAM and MYSQL that can run on a different set of ports in parallel.
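The command I ran to kick that off, reconstructed from my later attempts (so take the exact flags with a grain of salt), was:

```shell
# First attempt: create the pod, then drop the MySQL container into it,
# with the data directory bind-mounted from the host
podman pod create --name phpIPAM
podman run -dt --pod phpIPAM -e MYSQL_ROOT_PASSWORD=my-secret-pw \
    -v /root/phpipam-podman:/var/lib/mysql mysql:5.6
```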
This would have put it into a pod named phpIPAM and saved /var/lib/mysql into the directory /root/phpipam-podman.
And if we do:
# podman pod ps
POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID
3040956968bd phpIPAM Running About a minute ago 2 ca94fe7c5a5e
I’m *slightly* concerned at this point that they both supposedly expose the same port, but I didn’t explicitly expose it to the box, so maybe it’ll be OK. Now let’s add the phpIPAM part to that pod. Unlike with the Docker example, I couldn’t run the exact command from Docker Hub because the --link option was unrecognized. I’m hoping that having them in the same pod mitigates that. The command was:
podman run -dt --pod phpIPAM -p 8080:80 -e MYSQL_ENV_MYSQL_ROOT_PASSWORD=my-secret-pw pierrecdn/phpipam
I notice that putting containers into pods takes slightly longer than starting up Docker containers.
I got the error:
Error: cannot set port bindings on an existing container network namespace
So maybe I needed to set the port when first creating the pod. A quick search on the net seemed to suggest this was true. I couldn’t figure out how to remove the pod because it complained about having containers inside it. So for now, since this is a VM I can just throw away, I’m going to make a phpIPAM2 pod.
# podman pod create --name phpIPAM2 -p 8080
# podman run -dt --pod phpIPAM2 -e MYSQL_ROOT_PASSWORD=my-secret-pw -v /root/phpipam-podman:/var/lib/mysql -d mysql:5.6
# podman run -dt --pod phpIPAM2 -e MYSQL_ENV_MYSQL_ROOT_PASSWORD=my-secret-pw pierrecdn/phpipam
# podman pod ps
POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID
5387ffc281ae phpIPAM2 Running 4 minutes ago 3 e2c6c36682a0
3040956968bd phpIPAM Running 18 minutes ago 3 ca94fe7c5a5e
But I wasn’t QUITE where I needed to be as you can see here:
A few things to note here:
It’s doing 8080->8080
There’s a “pause” container represented in here for each pod. But otherwise the Podman view in Cockpit is kind of unhelpful for creating pods. It is fine if you’re just doing things the Docker way.
At least it’s a decent list of your images.
OK, let’s try this ONE MORE TIME!
# podman pod create --name phpIPAM3 -p 8081:80
# podman run -dt --pod phpIPAM3 -e MYSQL_ROOT_PASSWORD=my-secret-pw -v /root/phpipam-podman:/var/lib/mysql -d mysql:5.6
# podman run -dt --pod phpIPAM3 -e MYSQL_ENV_MYSQL_ROOT_PASSWORD=my-secret-pw pierrecdn/phpipam
# podman pod ps
POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID
9a83c0ea0089 phpIPAM3 Running 2 minutes ago 3 ddc4589ba911
5387ffc281ae phpIPAM2 Running 11 minutes ago 3 e2c6c36682a0
3040956968bd phpIPAM Running 26 minutes ago 3 ca94fe7c5a5e
I’m closer, but it doesn’t answer on 8081.
Ah, it turns out that, unlike Docker, Podman does not punch a hole through the firewall. I had to open up port 8081. Now the only problem is that the mysql container in that pod exited. So now what? Hmm… this time it was SELinux causing problems. I tried just a little to figure it out, but for the sake of getting things running, and given that this VM is on an isolated network, I just turned SELinux off. (But it’s good to know that SELinux helps protect the system when using Podman.) After all this it still wasn’t working, because the phpIPAM container was complaining it couldn’t get to the SQL database. So I figured I’d try one more thing: make sure I use the --name option in the container creation.
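Opening the port with firewalld looks like this (using the default zone; adjust if you run a custom one):

```shell
# Open 8081/tcp persistently, then reload so it takes effect immediately
firewall-cmd --permanent --add-port=8081/tcp
firewall-cmd --reload
```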
And I’m pretty sure that should belong to the phpIPAM container, not the MySQL container. I wonder if creation order matters?
# podman pod create --name phpIPAM5 -p 8081:80
# podman run -dt --pod phpIPAM5 -e MYSQL_ENV_MYSQL_ROOT_PASSWORD=my-secret-pw pierrecdn/phpipam
# podman run -dt --pod phpIPAM5 -e MYSQL_ROOT_PASSWORD=my-secret-pw -v /root/phpipam-podman:/var/lib/mysql -d mysql:5.6
And then… IT WORKED. IT FREAKIN’ WORKED! So the port that’s exposed should go to the first container you add. Hmm… that could be annoying for complex pods.
Well, it didn’t 100% work, because it’s expecting a certain name for the MySQL container and, since we couldn’t provide --link, that didn’t happen. But I could probably fix that by giving the MySQL container the right name. I’ll try that tomorrow. It’s been an interesting two hours already.
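The fix I have in mind for tomorrow, sketched out (the container name “mysql” is my guess at what the phpIPAM image looks for in place of the --link):

```shell
# Same recipe as phpIPAM5, but give the MySQL container an explicit name
podman pod create --name phpIPAM -p 8081:80
podman run -dt --pod phpIPAM -e MYSQL_ENV_MYSQL_ROOT_PASSWORD=my-secret-pw \
    pierrecdn/phpipam
podman run -dt --pod phpIPAM --name mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw \
    -v /root/phpipam-podman:/var/lib/mysql mysql:5.6
```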
When I tried to upgrade the laptop a couple days ahead of the Tuesday release date, assuming that the sources were as good as gold at that point, the upgrade process complained about the Kdevelop Python plugin and didn’t want to proceed. I figured if this persisted past Tuesday I would just use it as an opportunity to try out PyCharm Community Edition. But once Tuesday came around I was able to upgrade to Fedora 31 with nary a problem. So that was probably the smoothest upgrade I’ve had since Fedora Core 1.
Back when Fedora 30 came out, I updated my laptop, but I left my main computer and the HTPC on Fedora 29. The former because I was busy with something at the time and didn’t want the disruption of an upgrade; the latter because the family depends on it for entertainment. However, with Fedora 31 coming out next Tuesday, the support window of Fedora 29 is over. The HTPC didn’t give any issues when I started the upgrade (as of this time it’s still running the upgrade), but my main computer did. This time it complained about ripright and whois-mkpasswd.
I removed ripright and it didn’t seem to affect anything else. I also removed whois-mkpasswd. I can always try and reinstall them later. This allowed the download to proceed. Afterwards it was able to boot into Fedora 30 without any issues. whois-mkpasswd turned out to also be an issue when upgrading my server. When I checked in Fedora 30, ripright was not something that could be installed. whois-mkpasswd had reinstalled itself as part of the upgrade process. Looks like everything is running alright.
Just in time for Halloween, I discovered the Dracula set of dark themes. They’ve got themes for nearly every code editor and shell/console program you can think of. Here’s Yakuake with the Dracula Konsole theme:
And here’s Kate with the Dracula theme:
I like the color scheme, but the font’s a bit small, so I might make a variant theme with a slightly larger font size.