Last year when I went to Red Hat Summit, I saw a lot of use of Satellite. I’d tried Spacewalk, the upstream for the 5.x series, and it didn’t quite work out for me. But this time I would try it out, gosh darnit! With the Katello plugin it would even include Pulp, which I’d been interested in trying before because it can cache RPMs during an upgrade. So I’ve been messing with it here and there. However, I don’t use Puppet (it’s like Chef or Ansible in principle) and I don’t have the need to provision new machines or VMs (especially when that’s already pretty easy with Cockpit and/or Virt-Manager). It’s already easier and more consistent for me to keep track of whether my computers are up to date (and update them if not) with Cockpit. The RPM caching was neat, but recently it stopped working consistently. Upgrades are VERY fragile, and messing up a plugin install could bork the whole system. Also, the necessary packages – puppet, katello-agent, etc – were always behind in providing builds for Fedora. It turned out to be a bunch of extra work and frustration just to keep track of my computers – and I was already doing that in Dokuwiki.
So, farewell Katello and Foreman. It was fun to play with your technology because I’m a big computer nerd and found that fun. But there’s only so much time for sys-admin-ing and so after I get everyone unsubscribed and back on their usual repos, I’ll be getting rid of you.
Because I have this VM registered to Katello (a Foreman plugin) to receive updates (basically as a way of keeping track of the computers and VMs on my network and having a GUI to Pulp for caching RPMs), I had to deal with katello-agent. The latest RPM in the official Foreman/Katello repos is unfortunately for Fedora 29, a release that has been out of maintenance for a long time. Maybe Foreman (the upstream for Satellite) is mostly used at RHEL sites that don’t have any Fedora nodes? I did find this copr that provides updated versions: https://copr.fedorainfracloud.org/coprs/slaanesh/system-management/
To upgrade, first I had to go into Katello and remove the VM from its subscriptions. Then I ran dnf repolist to confirm it was only looking at the official Fedora and RPM Fusion repos. After that I ran the upgrade process. Once that concluded, I was able to install katello-agent, which brought in goferd and all the other packages I needed.
After that I just needed to subscribe it to Fedora 32 repos in Katello and everything was golden.
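For reference, the whole dance boiled down to something like this – a rough sketch, since the exact repo and copr steps depend on your setup (the commands assume the dnf system-upgrade plugin and the copr plugin are available):

```shell
# On the Fedora VM, after removing its subscriptions in the Katello UI:
sudo dnf repolist                             # confirm only fedora/updates/rpmfusion remain

# Run the release upgrade
sudo dnf system-upgrade download --releasever=32
sudo dnf system-upgrade reboot

# After the upgrade, pull katello-agent back in from the copr
# (since the official Foreman/Katello repos stop at Fedora 29)
sudo dnf copr enable slaanesh/system-management
sudo dnf install katello-agent                # brings in goferd and friends
```

Then it was just a matter of re-subscribing the host to the Fedora 32 repos in Katello.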
When it came to new dishes, April was all about bread. First, I made a no-knead bread with America’s Test Kitchen’s recipe.
It came out OK. I actually tried it again the following day to try and get a darker crust. The funny thing is that this is one of the easiest breads to make and yet it’s the one I’ve had the worst results with. The crumb wasn’t as open as it was supposed to be and for all the time it took to proof it was pretty meh.
My second new dish was Malasadas.
These are Portuguese doughnuts that made their way to Hawaii with Portuguese colonists/explorers, and they’re now most famous in the US as a Hawaiian thing. (Funny, in all my 5 or so trips to Oahu, I never heard about them.) My family on my mother’s side comes from Portugal, and I found out that my great-grandmother used to make these for my mom as an after-school treat. I can see why – they were incredibly tasty – better than any doughnut I’ve ever had.
As I mentioned in my k3s on Ubuntu 20.04 post, I really thought that Ubuntu 20.04’s server install was pretty slick. I’m used to text-only server installs looking like this:
Here’s a step-by-step collection of screenshots and my thoughts on each step of Ubuntu 20.04’s server install:
Just starting off, with the language selection, you can see this isn’t the usual ugly ncurses install. It looks like a beautiful matte black.
Now, this right here is something I’ve never seen (that I can remember – maybe Arch or one of the other distros I looked at a long time ago does this?) and it’s something EVERY distro should do. For years now almost every distro has allowed you to install off the net instead of the CD/DVD/USB/ISO if you have the bandwidth. But this is the first time I’ve seen the ability to update the installer – important if the installer has bugs (and I do remember some Fedora installers in the past having bugs and requiring me to get an updated ISO).
After that great bit of innovation, we’re back into familiar territory here, setting up the keyboard.
From there we move onto network connections. I’m just going to use this on the KVM NAT to test out server scenarios with Ubuntu. So I’ll just leave it on DHCP within that subnet. I skipped the proxy screen because I never use them and it’s pretty basic.
Most folks would leave this the same, but it’s possible a University or large institution would have their own Ubuntu mirror to just grab the packages once, rather than for each computer that needs upgrades.
Frankly this page looks exactly like most GUI installs, just with text selection instead of radio buttons and check-marks.
Interestingly, I think this is one of those places where the larger the institution, the simpler this would likely be, with users connecting to some sort of NAS or other complicated storage while keeping the front-end servers relatively simple.
This was the screen I found the most interesting, as it’s the one that diverges the most from Red Hat/CentOS/Fedora. First of all, there’s never a root password set. Second, under the server’s name, it lets me set what would be the localhost name, but doesn’t let me enter a localdomain.
This is another radical departure, and another one that I like – although I think the idea of a server without OpenSSH installed is very weird. BUT! I do like the ability to import an SSH identity for a potentially more secure login. Typically, in my experience, the Red Hat-based distros have OpenSSH installed by default, but don’t have the service enabled or started.
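If I understand it correctly, the installer’s identity-import option is the same idea as the ssh-import-id tool that ships with Ubuntu, so you can do the equivalent on an already-installed box (the username below is a made-up example):

```shell
# Pull public keys from GitHub (gh:) or Launchpad (lp:)
# into ~/.ssh/authorized_keys for the current user
ssh-import-id gh:some-github-user
```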
When I first installed Ubuntu Server 20.04, I was already awed by the slick-looking install and the self-updating installer when I got to this screen. THIS. IS. A. GREAT. FEATURE. Now, maybe Ubuntu Server is more likely to be installed by your Average Joe who got snared into Linux via Plex (as is often mentioned on /r/homelab). One of these people will show up to /r/homelab or /r/selfhosted once or twice a week to ask what else they should host on a Linux server. This list is a great example of what everyone else is running. The fact that they have sabnzbd on there makes me think this must come from raw numbers, not something Canonical is promoting. So maybe the engineer installing RHEL doesn’t need this list, because if you’re installing RHEL you’re doing it at work and you already know what you need. But I think CentOS and Fedora should really consider adopting something like this during package selection.
And that’s the Ubuntu 20.04 server installer. A lot of the usual install prompts, but a few innovative ones that I want all the other distros to “steal” for their installers.
Clearly there’s a lot I don’t get about Kubernetes, and I didn’t install a GUI in that VM so I can’t use the dashboard (which can only be viewed at localhost – or so the instructions seem to indicate). So I decided to go back to basics and run the Hello Minikube tutorial, but in my k3s VM.
So I think this is the first part of why I was having problems yesterday with the pod I created from Podman. A lot of the commands I saw online implied a deployment, but I hadn’t created one. This is evidenced by:
kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hello-node   1/1     1            1           3m25s
While pods showed:
kubectl get pods
NAME READY STATUS RESTARTS AGE
miniflux 0/2 CrashLoopBackOff 357 16h
hello-node-7bf657c596-2wc2j 1/1 Running 0 4m2s
So perhaps one of the things I need to do is figure out how to put a pod into a deployment. The next command they have you run is pretty useful:
kubectl get events
LAST SEEN   TYPE      REASON              OBJECT                             MESSAGE
27m         Normal    Pulled              pod/miniflux                       Successfully pulled image "docker.io/miniflux/miniflux:latest"
7m12s       Warning   BackOff             pod/miniflux                       Back-off restarting failed container
5m27s       Normal    ScalingReplicaSet   deployment/hello-node              Scaled up replica set hello-node-7bf657c596 to 1
5m26s       Normal    SuccessfulCreate    replicaset/hello-node-7bf657c596   Created pod: hello-node-7bf657c596-2wc2j
            Normal    Scheduled           pod/hello-node-7bf657c596-2wc2j    Successfully assigned default/hello-node-7bf657c596-2wc2j to k3s
5m21s       Normal    Pulling             pod/hello-node-7bf657c596-2wc2j    Pulling image "k8s.gcr.io/echoserver:1.4"
4m14s       Normal    Pulled              pod/hello-node-7bf657c596-2wc2j    Successfully pulled image "k8s.gcr.io/echoserver:1.4"
4m8s        Normal    Created             pod/hello-node-7bf657c596-2wc2j    Created container echoserver
4m7s        Normal    Started             pod/hello-node-7bf657c596-2wc2j    Started container echoserver
2m13s       Warning   BackOff             pod/miniflux                       Back-off restarting failed container
Although on a busy server I could see this getting overwhelming – hence OpenShift and other solutions that manage some of those things for you.
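For what it’s worth, the hello-node deployment above came from kubectl create deployment, so I suspect something similar would wrap my miniflux image in a deployment (and thus a replica set) instead of a bare pod. A hypothetical sketch – the image is just the one from my yaml, and this ignores that miniflux also needs its database:

```shell
# Create a deployment wrapping the image, rather than a bare pod
kubectl create deployment miniflux --image=docker.io/miniflux/miniflux:latest

# It should now appear in both listings
kubectl get deployments
kubectl get pods
```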
I’m still left uncertain of what I need to do to get things working. That said, for now, I think I’m just going to stick to Podman pods rather than the complexities of k3s. I don’t quite have the resources at the moment to run OpenShift, although perhaps I’ll give that another shot. (Last time I ran Minishift with OKD 3 it seemed to want to bring my computer to a crawl)
As I’ve been working on learning server tech, I’ve gone from virtualization to Docker containers and now Podman containers and Podman pods. The pod concept in Podman comes from a view towards Kubernetes. I moved to Podman because of the cgroups v2 issue in Fedora 31, so I figured: why not think about going all the way and checking out Kubernetes? Kubernetes is often stylized as k8s, and a few months back I found k3s, a lightweight Kubernetes distro that’s meant to work on edge devices (including Raspberry Pis!). For some reason (one I can’t seem to find on the main k3s site), I got it in my head that it was better tailored to Ubuntu than Red Hat, so I decided to also take Ubuntu Server 20.04 for a spin.
While one of my cloud servers runs Ubuntu, I didn’t have to install it – I just spun it up at my provider. So it’s been a long time since I did an Ubuntu installation; I think the newest ISO I had before 2020 was one of the 2016 Ubuntu ISOs. The server install is VERY slick – the slickest non-GUI install I’ve ever seen. I’ll have to do a future post about it. I liked that it detected a more up-to-date installer during the install and offered to download and use THAT installer – negating any potential installer bugs. One of the most interesting parts of the install was when it asked if I wanted to install some of the more popular server apps. The list was quite eclectic and must come from popularity data, because it even had Sabnzbd and I can’t imagine Canonical pushing that on its own.
One thing I *am* used to from my Ubuntu cloud server that I loved seeing here is all the great information you get upon login. I wish CentOS or Red Hat would do something similar.
I decided to go ahead with the k3s’ front page instructions under “this won’t take long”:
curl -sfL https://get.k3s.io | sh -
# Check for Ready node, takes maybe 30 seconds
k3s kubectl get node
After a bit (I didn’t time whether it was 30 seconds), I got back:
NAME   STATUS   ROLES    AGE   VERSION
k3s    Ready    master   95s   v1.18.2+k3s1
OK, looks like I have some Kubernetes ready to rock. I figured the easiest container I’m currently running in Podman would be Miniflux. I had already created a yaml file with:
podman generate kube (name of pod) > (filename).yaml
That command generates the Podman equivalent of a docker-compose.yml file. You can use that yaml to recreate that pod on any other computer. The top of the file it generates says:
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
So I’d like to try that and see what happens. Of course, first I have to recreate the same folder structure; in that yaml I’m using a folder to store the data so that it’s easier for me to make backups than if I had to mount the directory via podman commands.
After creating the folder, I transferred over the yaml file. Then I tried the kubectl create -f command.
sudo kubectl create -f miniflux.yaml
I waited for the system to do something. Eventually I got back the feedback:
Being new to true Kubernetes (as opposed to just Podman pods), I wasn’t sure what to do with this information. But I was happy that it hadn’t simply failed. Taking a look at the documentation for k8s, I learned about the command kubectl get. So I tried
kubectl get pods
NAME READY STATUS RESTARTS AGE
miniflux 0/2 CrashLoopBackOff 75 3h7m
Welp! That doesn’t look good.
Following along on the tutorial I typed
sudo kubectl describe pods
This gave a bunch of info that reminds me of a docker or podman info command. But the key to what was going on was at the end:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 33m (x34 over 3h8m) kubelet, k3s Started container minifluxgo
Normal Pulled 28m (x34 over 3h7m) kubelet, k3s Successfully pulled image "docker.io/library/postgres:latest"
Warning BackOff 13m (x739 over 3h5m) kubelet, k3s Back-off restarting failed container
Warning BackOff 3m40s (x772 over 3h5m) kubelet, k3s Back-off restarting failed container
That’s because the pod had more than one container. Turns out the issue was with the database. It’s complaining about being started as the root user.
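Since describe only summarizes events, the database’s actual complaint showed up in the container logs. Something like this – the container name placeholder is whatever your generated yaml calls the database container:

```shell
# A multi-container pod needs -c to pick out one container
kubectl logs miniflux -c <db-container-name>

# Or inspect the previous (crashed) instance of that container
kubectl logs miniflux -c <db-container-name> --previous
```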
(Sidenote: awesomely Ubuntu Server Edition comes with Tmux pre-installed!)
Strangely there doesn’t seem to be a way to restart a pod. The consensus seems to be that you use:
kubectl scale deployment <> --replicas=0 -n service
So I will try that. Apparently that doesn’t work when you create it from podman yamls.
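Since the scale trick only works on deployments, the closest thing to a “restart” for a bare pod created from a podman-generated yaml seems to be deleting it and recreating it from the same file:

```shell
# Delete the bare pod, then recreate it from the generated yaml
sudo kubectl delete pod miniflux
sudo kubectl create -f miniflux.yaml
```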
Eventually I decided to try and get underneath k3s. So the replacement for docker or podman in k3s is crictl.
Listing with crictl showed my containers. I thought I had maybe fixed what was wrong with the database, so I tried
crictl start (and the container ID)
Apparently it doesn’t want to do that because the container is in an exited status. This whole thing is so counter-intuitive coming from Docker/Podman land. Then again, when I went to try again, it had switched container IDs because k8s had tried to restart it. So it truly is ephemeral.
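For anyone else poking underneath k3s, the crictl listing commands map roughly onto the docker/podman ones – this is just the subset I ended up using:

```shell
# crictl talks to k3s's built-in containerd
sudo crictl ps                   # running containers only
sudo crictl ps -a                # include exited ones (the crashed db shows up here)
sudo crictl pods                 # the pods themselves
sudo crictl logs <container-id>  # logs for a single container
```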
Well, that’s all I could figure out over the course of a few hours. I’ll be back with a part 2 if I figure this out.
Yesterday I mentioned some issues with my Ortek MCE VRC-1100 remote and certain buttons not working. I figured out that, in addition to removing the XF…. entries in dconf, I also have to remove them in gsettings. Specifically, I had to use these commands:
gsettings set org.gnome.settings-daemon.plugins.media-keys stop-static "['']"
gsettings set org.gnome.settings-daemon.plugins.media-keys play-static "['']"
gsettings set org.gnome.settings-daemon.plugins.media-keys pause-static "['']"
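You can confirm the keys are cleared (and see what they were bound to beforehand) with gsettings get:

```shell
# Check the current bindings before/after clearing them
gsettings get org.gnome.settings-daemon.plugins.media-keys stop-static
gsettings get org.gnome.settings-daemon.plugins.media-keys play-static
gsettings get org.gnome.settings-daemon.plugins.media-keys pause-static
```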
After that, everything was working as it should. So far no negatives to using Fedora Silverblue as our HTPC. We’ll see if that changes as I try to get Lutris to launch some Wine games.
One thing to know about Silverblue is that it’s a GNOME environment. I was already running GNOME for the HTPC, but I usually prefer KDE for my computers. When I was installing Silverblue there was no option to go for KDE or anything else. On Silverblue you install applications via Flatpaks. Any regular installs (layered via rpm-ostree instead of plain rpms) also require a reboot.
A few things to note based on getting Kodi setup:
/media is actually at /run/media (but there is a symlink)
normally if you need to add stuff to the userdata folder for Kodi (advancedsettings.xml for using mysql, etc) the path is $HOME/.kodi/userdata . With Fedora Silverblue (and using Kodi Flatpak), it’s at $HOME/.var/app/tv.kodi.Kodi/data/userdata
And finally, $HOME is actually at /var/home/username (but there’s a symlink)
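A quick way to see these redirections for yourself on a Silverblue install:

```shell
# Silverblue sets /media and /home up as symlinks
# into the mutable /run and /var trees
ls -ld /media /home

# The Kodi Flatpak keeps its userdata under ~/.var rather than ~/.kodi
ls ~/.var/app/tv.kodi.Kodi/data/userdata
```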
Right now I’m trying to figure out how to get my Ortek MCE remote to work with Kodi (at least for Play, Pause, and Stop). If I figure that out, I’ll definitely post here.
Originally I was going to mess around with Silverblue in a VM before considering using it on my HTPC. In theory it sounded like it would work very well – an immutable, rollback-able OS seems like the perfect thing for the one computer that ALWAYS needs to work for less tech-savvy folks in the house. But the first release of Silverblue seemed to still be a bit rough around the edges. Lots of recent blog posts on Fedora Planet (a blog aggregator for folks who participate in the Fedora project) seemed to indicate that things were in a better place now for Silverblue. Still, I was going to first mess around in a VM. But then I had to reboot the computer after things went awry with the display and this time I wasn’t able to get around the Free Magic issue that had been plaguing me for a few months now since upgrading to Fedora 31 (in anticipation of Fedora 30 being out of the support window). The Free Magic issue basically would appear after the grub menu and while others’ reports on bugzilla seemed to indicate that a kernel upgrade fixed it for them, such was not the case for me. For a while it worked such that if I was there on reboot and hit enter on the grub entry, it would work (while it would fail if you left it to boot on its own). But tonight it would not yield. The computer had gone catatonic. So, I figured it was as good a time as any to try and move to Silverblue. As a bonus, I was going to move the installation to an SSD, so I still have the old, borked installation if things go completely wrong (although I’d still need to fix the Free Magic issue).
I’m installing the base system as I write this. Future posts will document getting blueyoshi (that computer’s name) back up to working condition as the HTPC.
As things continue to happen in the commercial IoT space like Wink switching to requiring subscription fees, I continue to feel happy that I’m creating my own Internet of Things solutions rather than relying on commercial vendors who can decide to disappear or suddenly start charging fees. The cost for me is that things go at a slower pace and, obviously, don’t have sleek packaging. I think I can live with that.
Raspberry Pi B (1st Gen)
Because I don’t have that much disposable income for my projects, I prefer for my hardware projects to solve a problem for me. In this case, one of the problems I’ve had is leaving the garage door open and not realizing it. Mostly this happens when something distracts me (usually a parenting “emergency”) and breaks up the routine where I check the garage door after putting away all the kids’ outdoor toys. So, after finding someone who’d done a similar project with a first-gen Raspberry Pi, I decided to code up my own solution. As of now I’ve got the system interfacing with Home Assistant to let me know at a glance if the garage door is open (great if I’m on any floor other than the first floor), and I’ve also set up an automation in HA to let me know if the door is open after sunset. Additionally, I have the status pushed to my Matrix instance. There are a few more tweaks I’d like to make, both to make it more useful for folks who aren’t me and to get it to a perfect level of reliability (right now a REALLY strong wind can cause a blip where the door seems to open and then close).
Arduino MKR WiFi 1010 and ENV Shield
Once again, this project is about helping me out with my shortcomings. All too often I would end up getting into the shower and forgetting to turn on the fan. Since I like to shower with water just this side of first degree burns, this isn’t so good for the bathroom environment which can reach into the realm of 90% humidity. So I built a sensor with the Arduino MKR WiFi 1010 and the ENV Shield. It measures humidity and when it reaches a certain threshold, it tells Home Assistant to turn on the bathroom fan. It also reports back the temperature and the light intensity. The mini-breadboard you see in the above image was something I added to try and do some hardware debugging. It did help me realize quite a few things I needed to change in my code. It also seems to keep the ESP32 chip from locking up. I’m not sure if that’s because having some load (from the LEDs) actually does something or if it’s just coincidence. But after adding the breadboard, the system went from a few days between lockups to going weeks before needing to be power cycled.
While I did a lot of cooking in March, I only made one new dish – Brown Sugar cookies. I’d had regular sugar cookies my entire life. I’m pretty sure this is the first time I had sugar cookies made with brown sugar instead. The brown sugar definitely took these cookies to a whole other taste realm where the molasses in the sugar added another dimension to the taste. I’m not saying it’s supplanted [white] sugar cookies in my heart, but that there’s a place for each of them. If the kids ate more cookies or if I didn’t care about my heart health (most cookies have a LOT of butter) I’d make these a lot more often.
I recently upgraded my server to Fedora 31, as the Fedora 30 support window had closed. All I had to do was disable the bat modular repo. It wasn’t obvious I needed to do this at first, but I found a bugzilla that covered it. Then everything proceeded.
I also updated my main laptop to Fedora 32; it’s always my first upgrade since it’s not my main machine. That one required a few modular repos to be disabled, as well as dealing with a bunch of conflicts from Python 2 packages. By using dnf’s --allowerasing option, everything proceeded and seems to be running just fine. I was a little worried at first by the warning about coming back from a locked screen in KDE, but I decided I could live with it on the laptop. So far, either the issue doesn’t affect my laptop or I haven’t triggered the conditions.
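For reference, the laptop upgrade boiled down to something like this – the module name is just an example; disable whichever modular streams the upgrade complains about:

```shell
# Disable the offending modular stream(s) first
sudo dnf module disable bat

# Then run the release upgrade, letting dnf erase
# the conflicting Python 2 packages
sudo dnf system-upgrade download --releasever=32 --allowerasing
sudo dnf system-upgrade reboot
```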
With how well things went, I’ll probably upgrade my main machine in a month or so – mostly so I can get access to Python 3.8. If that ends up coming to Fedora 31, then I may wait for the Fedora 33 upgrade and just do a 2-level upgrade. Somewhere in between I’ll probably upgrade my VMs – one for MythTV and one for building RPMs (which I set up so that I wouldn’t have to install so many gosh-darned packages on my main machine – which makes upgrades take forever).
Being a white male, I never had any trouble finding representation on TV. This hasn’t always been the case for everyone, though it’s only recently (the last 5 or so years) that folks have begun to speak out on how important representation is. When you rarely see yourself in media, I’ve been told, you feel left out of the culture. Disney started rectifying this in its Disney Junior line, first with Doc McStuffins.
Doc McStuffins stars a young African-American girl who is a doctor to her stuffed animals, which she can bring to life. The show is pretty awesome on quite a few levels. First of all, when most young kids’ TV shows had male protagonists (this is changing, as you’ll see in this blog post), Doc McStuffins pushed the boundaries by having a female protagonist. Second, Doc’s mother is a doctor. Third, her father is a stay-at-home dad, a trend more families are following, providing an example for children of color. The story lines emphasize using logic to solve problems as well as a general theme of reducing doctor anxiety – things kids of all backgrounds can appreciate.
Next, Disney released Elena of Avalor. It’s for a slightly older kid set, but what’s most groundbreaking about the show is that it’s an action show with not one, but three female main characters. For the longest time, the Hollywood consensus was that girls were not interested in action shows and that boys wouldn’t watch shows with female protagonists. Elena lives in a fictional country, Avalor, but it’s clearly modeled after a mix of MesoAmerican and Mexican cultures. There’s even an episode that has this MesoAmerican game that’s kind of like basketball meets soccer:
It also has an equivalent to The Day of the Dead and Christmas-like celebrations that incorporate Mexican themes. The show also excels at the female relationships – both Elena and her little sister and Elena and her best friend. In fact, in most of the episodes I’ve watched with the kids, there tends to be a role-reversal with the two male leads competing for Elena’s affections.
This brings us to the most recent cartoon in this progression, Mira, Royal Detective. It has quickly become a favorite of our youngest because of the addictive music in the show. The show is targeted towards Indian-American (as in, your parents are from the South Asian subcontinent) kids. The theme song has a Bollywood-style opening:
And in every episode, as Mira goes searching for a clue, she does a Bhangra-ish little dance with her mongooses.
I’m much less familiar with Indian culture, so I can’t speak as authoritatively to how faithful it is, but there are definitely mentions of foods like samosas, there’s Indian architecture, and Disney has done a great job with the casting. There are a good number of South Asians in Hollywood, so there was no reason not to cast them for the voice acting. Mira, Royal Detective features Freida Pinto, Kal Penn, Jameela Jamil, and Aasif Mandvi, as well as some other names I don’t recognize.
Given how important it is for kids to see themselves in TV characters to feel like they’re part of the culture, I think it’s pretty awesome of Disney to be doing this. Of course, it also exposes other kids to these cultures, which hopefully makes them seem a little less weird and different. (And, in the part of Maryland where I live – the South Asian population is pretty large so it’s good to have preschool kids exposed to these differences before they get to school) And, just because Disney probably did it to attract more spending dollars from wider sections of the American public doesn’t make it bad. Sometimes the results matter more than the incentives.
A little over a year ago, I put CentOS 7 on my Acer Aspire One. We had no idea when RHEL 8 was coming out (turns out it was just a few months later – the Red Hat Summit I attended was the release party for RHEL 8), so 7 went on there. And at Red Hat Summit I learned that suspend worked on that netbook while it was running CentOS 7. However, CentOS 7 was already pretty old by the time I put it on the netbook, and it was missing certain libraries and had old versions of tools like Go, so I couldn’t do things like install Weechat-Matrix on there.
Now that CentOS 8 has been out for a while (I think part of the delay came from setting up CentOS 8-Stream), I decided to put CentOS 8 on my netbook. There isn’t any supported upgrade path (unlike Fedora), which is a bummer. However, this netbook is just used for light travel and as a backup at conferences in case my main laptop doesn’t work, so I didn’t mind blowing it away.
When I put in the USB stick with the install ISO, everything worked fine. However, the minimal install did not include NetworkManager-wifi (which probably would have been included with the Desktop/Laptop install, but I really didn’t want any more packages than I needed). So if you find yourself installing CentOS 8 on a laptop where the WiFi works during installation but not afterward, you just need to plug in an ethernet cable and install NetworkManager-wifi. After a reboot, everything worked perfectly, network-wise. I was also pleased to see nmtui installed, as I prefer it to nmcli most of the time; it’s usually the first thing I install on a CentOS system.
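So, the post-install fix amounts to this (run over the temporary ethernet connection):

```shell
# Wi-Fi support is a separate package on a CentOS 8 minimal install
sudo dnf install NetworkManager-wifi
sudo systemctl restart NetworkManager

# Then configure the wireless connection with the friendlier TUI
nmtui
```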
On the unfortunate side, although there are EPEL packages for CentOS 7 for weechat and i3 (the window manager), they don’t exist for CentOS 8. Perhaps it’s a matter of time or perhaps someone needs to step up to make those packages.
OK, that’s it for now. I’ll report back if there’s anything astonishingly great or bad about running CentOS 8 on my netbook.