Raspberry Pi Zero W for New Projects

The next project I wanted to work on was seeing whether my environment monitoring might be more reliable with a Raspberry Pi than with an Arduino. So I wanted to do some comparisons. For my bathroom IoT project, I am using:

That’s a total of $74 before taxes and shipping. To get the same measurements on the Pi platform I went with:

That’s a total of $66.40 before taxes and shipping. In a way, that’s pretty incredible, because the Raspberry Pi beats the pants off the Arduino when it comes to computing power specs. Also, the Pimoroni Env Hat has a screen to display whatever you want – on the product page it shows a graph of the data being measured. It also has a button. It’s also basically the same size:

Raspberry Pi Zero W size comparison with Arduino MKR Wifi 1010

In fact, this whole endeavor has really made me think a lot about how to decide which boards to use for various projects. On the one hand, the Raspberry Pi (any version or revision) is going to be easiest to program and debug. On the programming end, you can use pretty much any programming language you are familiar with – from Python to Rust to Go. On the debugging end, you can plug it into a monitor via HDMI. You can SSH into it (as long as it’s connected to Wifi). If you’ve designed your project to do so, you can check log files. It’s got a lot more RAM, and it’s got as much storage space as you want to add via an SD card. The Arduino, on the other hand, needs to be programmed in its C variant. (Although Adafruit’s boards – and many others – can run CircuitPython.) For debugging, I had to add on a breadboard with some LEDs so I could tell what was going on, because sometimes plugging it into a laptop to debug is either impractical (vs using SSH with a Raspberry Pi) or changes enough about what’s going on to mask the issue. And, as you can see above, a Raspberry Pi Zero or Zero W has essentially the same footprint (that is to say, it takes up the same space in your project).
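To make the log-file debugging workflow concrete, here’s a minimal sketch of the kind of log-to-disk monitoring loop that makes Pi debugging over SSH so convenient. The read_sensor function is a hypothetical stand-in (a real project would call an actual sensor library like Pimoroni’s); everything else is standard-library Python.

```python
import logging
import random  # stands in for a real sensor library in this sketch


def read_sensor():
    """Hypothetical sensor read -- random values stand in for real hardware."""
    return {"temperature_c": round(random.uniform(18, 30), 2),
            "humidity_pct": round(random.uniform(30, 70), 2)}


def setup_logger(path="environment.log"):
    """Log readings to a file you can later tail over SSH."""
    logger = logging.getLogger("env_monitor")
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(path)
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    logger.addHandler(handler)
    return logger


if __name__ == "__main__":
    logger = setup_logger()
    reading = read_sensor()
    logger.info("temp=%(temperature_c)s humidity=%(humidity_pct)s", reading)
```

With something like this running, `tail -f environment.log` over SSH replaces the blink-an-LED debugging the Arduino forced on me.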

Raspberry Pi Zero W

Of course, there are reasons people use Arduinos and Adafruit Feathers (and other boards). For starters, while debugging is easier with a Raspberry Pi (it’s running a whole freakin’ Linux distro), it’s also harder with a Raspberry Pi (because it’s running a whole freakin’ Linux distro!). To give an example, I bought a Raspberry Pi 4 for my daughter to use as a jukebox running Mopidy. After doing an update of the system, something changed in the libraries and the way they handled outputs that made it stop working. It was a week of work to figure out what went wrong. (And I wouldn’t have figured it out without help on the Mopidy forums.) By contrast, an Arduino is (to simplify a bit) ONLY running your code. With the exception of firmware upgrades for various chips, there’s nothing to update. If your code works now, it will always work unless you change something about your code. Speaking of always working – that brings me to one of the reasons I wanted to write this post: sometimes the Raspberry Pi can be quite fragile. I didn’t have a 5V 2.5A power supply available (the minimum recommended by the Raspberry Pi Foundation), so I used a 5V 2A charger. When I did a shutdown, it needed a little more power than the charger could deliver, the card became corrupted, and I had to reflash it. At least I was able to switch to Raspbian Lite, which dumps you straight to the console. No need to waste space and RAM on a desktop if it’s just going to be measuring environmental data. Back to reasons you might use an Arduino, Feather, etc – they’re generally referred to as prototype boards. Because they’re so simple, if you end up using one to build something useful, you can then get a single board printed with exactly the chips and connections you’re using from the Arduino board – and that’s a pretty powerful proposition. Also, generally speaking, a power outage is not going to screw up your program, whereas it can cause corruption on a Raspberry Pi that doesn’t shut down correctly.

Going forward, I’m not 100% sure what I’m going to choose, but I imagine it will depend on a variety of features. Do I need it to survive power failures? Is it a one-off or something I might want to duplicate? Do I want to challenge myself with C(ish) code or just do something easy with Python? How much money do I have for my build? And do I care about needing to update libraries to keep my network safe?

I don’t get as many comments on my posts nowadays – partly because internet culture has shifted to commenting on Reddit, Facebook, Hackaday, etc – but if you are a fellow maker, I’d love to hear your decision-making process when trying to select a board for a project.

Vivaldi On Windows Part 1

This is the first post continuing my exploration of web browsers outside of Firefox and Google Chrome. You can read the introduction here.

Running Vivaldi for the first time.

For the first browser to check out on Windows, I decided on Vivaldi. My thought process is that I’m most likely to end up with Brave, so better to save that one for last. But as I went through the first-run process in Vivaldi and saw the nice polish the browser seems to have, it really started tugging on me, saying, “Are you sure you wouldn’t want to just stay with Vivaldi?” For this first post, I’d like to cover the first-run process and then a little video poking around the interface. This’ll be followed up in a while with whatever impressions I come away with from my usage of Vivaldi on Windows.

Vivaldi asks if you want to import bookmarks and settings from previous browsers.

During the import-settings step, it appears to just list every major browser (and Vivaldi) regardless of whether you have it on the system. (Or maybe I’ve installed Chromium in the past and the config files are in the Windows equivalent of my home directory?)

Deciding if you want to enable ad and/or tracker blocking

I found this to be a very interesting settings page. When they ask you about blocking trackers and ads, who the eff chooses not to even block trackers?!? Of course, I understand not blocking ads so that sites can get paid. But I don’t understand who *wants* to get tracked (other than researchers). I’m going to block everything and then see if sites later ask me to whitelist them – that would give me some insight into what the whitelist tool looks like.

Selecting the color scheme

When I installed Vivaldi on my Linux computer, I decided to use tabs on the side. On the one hand, one of my Windows monitors is a 2k monitor, so it’s OK to have tabs on top – I would imagine most sites are expecting users to have 1080 or lower resolution. On the other hand, it IS widescreen, and many websites don’t actually take up much horizontal space. So why not go with a left or right tab structure (especially since I’m keeping my Windows taskbar on the bottom)? Since they already have a bar on the left (which I will explore in the video below), I decided to put my tabs on the right so I wouldn’t end up clicking on their other features by accident.

Speaking of those features, I know it might be kind of passé at this point to have bookmarks – maybe geeks/nerds just leave tabs open forever. I still use bookmarks here and there. So I like that it’s a button on the left in Vivaldi rather than being somewhat hidden as it is in Chrome.

Finally, some tutorials and other helpful bits

I think it’s neat that at the end they offer up a link to video tutorials. Sometimes videos work better than words, as you’ll see below.

Interesting initial speed dial. These, plus Bing as the default search, are all paid positions. That is to say, Amazon, Walmart, and the others are all paying Vivaldi some money to be there when you first install it and until you accumulate some automated speed dial entries.

Notes feature and also showing how some websites don’t use the full width

The Notes Manager feature could make this an excellent browser for students in K-12 and College classes. Also the note-taking screenshot gives an example of how many websites are not formatted to take advantage of a widescreen view; thus it’s fine to have the tabs on the side.

Now, here’s a video where I explore the various buttons all over the UI:

Notes after using for just a few days:

So far I have had one instance where listing the tabs on the side (instead of the top) didn’t work well. I had to shrink the tab bar on the right to its skinniest size for the YouTube upload page to show the entire UI while taking up half of a 1080 screen. Why did I have it squashed to just half the screen? So that I could have the files I was dragging in to upload on the left side.

Having just learned of the existence of speed dial groups, I am already jealous that I don’t have them at work. IT is very strict about which browsers we can have installed.

Are Web Browsers Getting Exciting Again?

It’s been a while since I last considered web browsers. I wrote this post in 2008 about which browsers I was using. And in 2011 I wrote this post about KDE Browsers. So that’s at least 9 years since I wrote about browsers. What is my current situation?

Well, on Linux I bounced back and forth between Firefox and Chrome, depending on which one was getting better performance. At this point, for what I do, Firefox is the winner for me. I use it on my laptop and desktop and it gets things done without getting in my way. I don’t necessarily have the most modern GUI setup because it tends to keep your GUI settings as you upgrade. This is what it looks like:

Firefox 76 on Fedora

I’ve set a dark color scheme to match my dark scheme on KDE.

I don’t use Chromium, but I have it installed. I use Falkon sometimes to segregate browsing and reduce cross-tab scripting malware. For Chrome, it actually now takes FOREVER to start up on my computer (relative to Firefox, Falkon, or non-browsers) so I rarely use it.

Over on Windows I use Chrome. I think for a little bit Firefox had fallen behind in terms of performance on Windows and I haven’t really seen a reason to change my defaults.

I put out a call on Reddit to see if there were other browsers worth checking out. I heard about some browsers that I’d already seen bandied about (Brave) and some I’d never heard of (qutebrowser) and a few others. So I think for the first time in a decade, I’m excited about the browser space again. (Also, the old guard continues to innovate. This morning I read about a new, neat type of link that Google Chrome is pioneering. And the Mozilla foundation has a bunch of neat R&D projects like WebVR and the web of things.) The browsers I intend to check out (and maybe write a post or two about) are:

  • Brave – This browser has me excited for a number of reasons. It’s co-created by Eich, who created JavaScript (in a week, if I remember correctly) and who co-founded Mozilla. Yes, it’s based on Chromium (just as with Oracle and Red Hat, it’s odd to be competing on the same code base you’re not equally contributing to), but they seem to really be making a large effort to find the true balance needed for the web when it comes to advertising. Blocking all ads could kill off most professional content and drive it behind a paywall. Allowing all ads opens you up to privacy violations. But Brave tries to fix this by replacing tracking ads with non-tracking ads. There’s a lot of room for things to go wrong and for that to create some bad incentives, but on the surface it seems awesome. This *might* be the browser I move to on Windows. (And maybe also on Linux.) Also, their private browsing mode can (optionally) include Tor.
  • qutebrowser – “everyone” loves Vim. Nearly every editor has a Vim-compatibility mode. This browser takes that to the extreme. It’s a keyboard-driven browser (perhaps it should be the default on the Ratpoison window manager) that uses a lot of Vim shortcuts – like hjkl for moving around on a page and : (colon) to give various commands to the browser. On the negative side, you have to completely relearn all your browser keyboard shortcuts. On the positive side, you have almost no need for any mouse usage (there’s even a mode that assigns a shortcut letter to each link so you don’t have to go link-by-link as I’ve had to do in the past with a commandline browser), so it could be extra great for RSI sufferers who are trying to limit their mouse usage. There’s also probably a pretty big productivity gain once you get used to the shortcuts because your hands never have to leave the keyboard.
  • Vivaldi – If you go through my old browser posts you’ll see that there was a chunk of time where I really loved Opera and ran it as my main browser on Windows. Vivaldi is the spiritual successor to Opera; after the company went from being Scandinavian-owned to Chinese-owned, a bunch of the hackers went off to start Vivaldi. This, like Brave, is another Chromium-based browser. But they bring Opera-level customization and features to the browser, transforming it into something you’d never recognize as sharing the same code under the hood as Chrome. Of course, this makes it similar to qutebrowser in the sense that you really have to go all-in to truly comprehend and use all the features. (Just go to that front page and scroll through the features – as of today there are 15 features touted on the home page.) This is why, as with the original Opera, those who love Vivaldi LOVE Vivaldi. Once you get to that level, it’s hard to go back to a plain browser. I’ve played with it a little, but haven’t really given it a proper spin. (Partially because when I first installed it onto Fedora it was still in beta.) I could see this possibly becoming my daily driver on Windows and maybe on Linux.
  • Microsoft Edge?!? – obviously this would just be on Windows. As you can see in the image below, there’s no Edge for Linux. (Yet?) Ever since I switched to Firefox, I never liked any iteration of Internet Explorer, even as they tried and tried to make it less crufty. When they first introduced Edge, I tried it a couple times because it was the default on a computer. I didn’t care for it. But now it’s based on Chromium AND it seems to be getting rave reviews online. Could that be paid content? Perhaps. Nothing surprises me on the web anymore. But perhaps it’s worth checking out…

So, there we go, a handful of browsers I’m interested in and why I believe the browser space is getting to be a lot of fun, even if we’re mostly down to either a Chromium or a Firefox backend (is their engine still Gecko? The Chromium equivalent would be Blink). (Note: qutebrowser uses QtWebEngine, which is itself based on Chromium.) In some ways, browsers have become like Linux distros. They’ve all got the same underlying packages – gcc, KDE, etc – but what they bring to the table is the UI and how polished the final product is. It’s not a perfect metaphor because Google controls both the package equivalent (Chromium) and the distro equivalent (Chrome), but there’s still room for improvement without the stagnation we saw when Internet Explorer had a 90% market share. (At least not yet. We’ll see what happens with Google if Brave, Ungoogled Chromium, and other potentially revenue-sapping browsers catch on.)

On Windows I could easily see myself ending up on Brave. I’m already using Chrome, so why not a Chrome that does less tracking? Also, my browser on Windows is of minimal importance these days since my Windows computer is just my video game machine / video game streaming machine. 99% of my Windows web browsing is going to YouTube to upload my “Let’s Play” videos. Or maybe I could once again go the Opera route and choose Vivaldi. Maybe even the unthinkable (for 10-years-ago Eric) and go back to Microsoft with Edge.

On Linux I’m probably more likely to stay with Firefox. But I could maybe see myself being tempted by qutebrowser or Vivaldi. Brave is a distant possibility, but maybe it will convince me.

Yesterday and Today’s Programming: Scratch

Stella’s Project

Because last week was busy with house projects, this week I continued the ocean/water-themed programming from the Raspberry Pi Foundation. The first project was a game I made with Stella (her first computer game creation), a boat race in Scratch. While we mostly stuck to the tutorial, we did partake in the challenges, including adding a shark and figuring out how to have background music (which Stella chose on her own). That turned out to be really tough, as my attempts to figure out how to use the loops led to stuttering. Eventually I figured it out and you can see what I did, the rest of the code, and how the game plays by watching the video below:
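For fellow Scratchers hitting the same wall: a common cause of background-music stutter in Scratch is restarting the sound on every pass through a forever loop. I’m not certain this is exactly what bit us, but the pattern (in rough Scratch pseudocode) looks like this:

```
when green flag clicked          // stutters: the clip restarts on every loop pass
forever
    start sound [music v]

when green flag clicked          // smooth: waits for the clip to finish before looping
forever
    play sound [music v] until done
```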

Scarlett’s Project

Of course, it turns out that if I had done Scarlett’s project first, I would have learned how to play music in the background of a Scratch game. Scarlett’s Scratch project was to make a synchronized swim for Scratch the Cat. Scarlett’s customizations were choosing the music as well as adding the ability to choose how many cats were in the pool for the swim. Check out the video below to see the code as well as what the final project looked like:

Today’s Programming: Scratch and Python

Scratch

Starting off with this code from the Raspberry Pi foundation, Sam made his first ever video game. I then modified it so the shark would close his mouth when he eats the fish and added a sound at the end when a score of 5 was reached. Here’s a video of Sam playing his game:

Python

For Python I continued to work on writing unit tests for my Extra Life Donation Tracker. Doing so helped me find a few bugs and a few functions that I was able to combine to reduce the code complexity. Overall, this has been a very productive phase for the project, even if it has been very frustrating figuring out how to mock out certain functions, like those reaching out to the net or those writing to disk. Depending on what I can do with some help that someone offered me on Reddit, I will either go back to the extralife_io.py file to try and finish that one up, or I may declare unit testing complete for now and tackle it later when I’ve learned even more about how Python works and how to test it.
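As an illustration of the kind of mocking that gave me trouble, here’s a sketch using Python’s unittest.mock. The fetch_donation_total function is a made-up stand-in, not the tracker’s actual code; the point is patching the network call so the test never touches the net:

```python
import json
import urllib.request
from unittest import mock


# Hypothetical stand-in for a function that hits a donation API --
# the real tracker's code differs; this just shows the mocking technique.
def fetch_donation_total(url):
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read())["sumDonations"]


def test_fetch_parses_json():
    with mock.patch("urllib.request.urlopen") as fake_urlopen:
        # The object bound by "with urlopen(...) as response" is the
        # context manager's __enter__ value, so that's what we configure.
        response = fake_urlopen.return_value.__enter__.return_value
        response.read.return_value = b'{"sumDonations": 150.0}'
        assert fetch_donation_total("https://example.com/api") == 150.0
```

Because urlopen is looked up on the urllib.request module at call time, patching the module attribute is enough – the test never opens a socket.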

Last Three Days of Programming: Python

I’ve mostly been working on version 5.0 of my Extra Life Donation Tracker. Since I adopted the Semantic Versioning principle for the project, an API change means a major version change. I’ve been taking everything I’ve learned about Python programming from 2019 and 2020 and tried to make my code both more Pythonic and more sophisticated. I’m also trying to move towards 100% code coverage. That is to say, I’m looking to try and make sure every line of code is covered by a test. While 100% test coverage doesn’t guarantee perfect code (after all, the test itself could be flawed or I might not be considering corner cases), striving for it usually has a few benefits:

  • by thinking about how you will test the code, you have to think about how the code works. This sometimes reveals errors.
  • thinking about what you will test sometimes leads you to consider corner cases you hadn’t considered were there
  • well-written tests help when dealing with bug fixes, new features, and refactors by helping to prevent/reduce introduction of new bugs

Getting to 100% coverage has been pretty hard and I’m currently struggling with how to force my code to cause errors that will allow me to make sure I’m catching errors correctly. I expect getting to 100% coverage will take the rest of today and probably go into next week. After that, I’ll be adding MyPy coverage (trying to make sure I’ve got type annotations throughout the code), using the “rich” module to generate better user output on the commandline, some tidying up, and then providing some new output for team-focused streamers.
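On forcing errors to exercise the error-handling paths: the trick I’ve been leaning on is mock’s side_effect, which makes a fake callable raise instead of return. The names below are hypothetical illustrations, not my actual code:

```python
from unittest import mock


# Hypothetical function for illustration -- not the tracker's real code.
def fetch_with_fallback(fetch_func, cached_value):
    """Return live data, falling back to a cached value if the net is down."""
    try:
        return fetch_func()
    except ConnectionError:
        return cached_value


def test_falls_back_on_connection_error():
    # side_effect makes the fake raise instead of returning, forcing the
    # except branch so coverage sees that line executed.
    broken_fetch = mock.Mock(side_effect=ConnectionError("network down"))
    assert fetch_with_fallback(broken_fetch, cached_value=42) == 42
    broken_fetch.assert_called_once()
```

The same side_effect trick works for disk errors (OSError) or anything else that’s hard to trigger naturally on a healthy machine.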

Today’s Programming: Ruby and Python

I don’t know how long I intend to keep doing this, but I decided I wanted to document my programming as I went along. So yesterday I worked on Scratch and here’s today’s entry.

Ruby

A while ago I got a bunch of kids’ programming books in a Humble Bundle. I tried showing Ruby to my oldest, but I did it one year too soon (she wasn’t yet reading as well as she is today and couldn’t type as well as she can today), so for now she’s not into programming. But I was curious to see how it was presented, since the book uses a story to teach the language (quite different from the Python book in the same bundle). I went through chapter 2 today and, so far, it seems that Ruby is pretty readable, like Python. That said, I’m not sure puts makes more sense than print, but maybe if I delve into the history of Ruby, I’ll understand why it’s puts? The author of this book uses snake case for variable names. I wonder if that’s because it’s the Ruby standard to use snake case instead of camel case or just to make it easier for the kids following along. I *did* really like the built-in next and pred methods on numbers. Definitely more readable than var++ or var = var + 1. Or rather, if you don’t have decades of programming experience (as I do), I think it’s just a faster bit of cognition to see var.next and understand it vs the older ways of doing the same thing.

Python

Snowflakes with Sam

Snowflake wreath

Yesterday I did a bit of Scratch art with Stella. Today Samuel and I used Python’s turtle module (as in the turtle graphics from Logo, the Scratch of its day – you can get raw turtle graphics on Linux with the KTurtle program) to draw snowflakes, again following a Raspberry Pi tutorial. Above is the final result of what we came up with. The tutorial challenge at the end was to make snowflakes all over the place in different sizes. Instead, I decided we should make a wreath, since snowflakes make me think of Christmas (even though it almost never snows here on Christmas). You can see our code on Github. Here’s a video of what it looked like while it was drawing:

The “turtle” drawing the snowflakes

Amortization Program

Recently, because of all the new, low interest rates, I had to calculate how much we have left on the mortgage vs what we’d save after a refi (plus costs). So I loaded up the program to do the calculations. Recently (I think on Python Bytes) I heard about the Decimal module, which does more accurate decimal math than the binary floating point Python normally uses. It often isn’t a big deal, but over a series of calculations it can add up. It made a sub-$100 difference in the sum of how much interest I would pay over the life of the loan, but better to be more accurate than less accurate. I made a 4.1 release of my Amortization program as a result.
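For the curious, here’s roughly what the Decimal version of the payment math looks like. The loan numbers below are illustrative, not my actual mortgage, and this is a sketch of the standard amortization formula rather than my program’s exact code:

```python
from decimal import Decimal


def monthly_payment(principal, annual_rate_pct, years):
    """Standard amortization formula: P * r(1+r)^n / ((1+r)^n - 1)."""
    # Build Decimals from strings so no float rounding sneaks in up front.
    P = Decimal(str(principal))
    r = Decimal(str(annual_rate_pct)) / Decimal("100") / Decimal("12")
    n = years * 12
    growth = (1 + r) ** n
    return P * r * growth / (growth - 1)


def total_interest(principal, annual_rate_pct, years):
    """Total interest paid over the life of the loan."""
    payment = monthly_payment(principal, annual_rate_pct, years)
    return payment * years * 12 - Decimal(str(principal))


# Illustrative numbers (not my mortgage): $300,000 at 3.25% for 30 years
# works out to about $1,305.62/month.
```

The float version of the same formula drifts by fractions of a cent per payment, which is how a 360-payment sum ends up differing by real dollars.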

Today’s Programming: Flowers in Scratch 3

First of all, if you’re a Linux user and would like an offline version of Scratch 3, you can get it from https://scratux.org/. They make binaries for a few Linux distros plus an AppImage which works on any of them. On the Raspberry Pi Blog I saw that this week was about making art on the computer. So Stella and I went through the tutorial for making flowers in Scratch. Here’s what the code blocks looked like:

The code blocks used in Scratch to generate flowers

And here’s a video going through the various flower arrangements:

An animated gif of what each of the commands for flower generation did.

Stella had a blast entering numbers and seeing how that changed what we got out of the flower algorithm. I had a blast programming with her. Win/win. Also, I learned that Scratch is much more complex than I’d realized. Of course, with the right programming blocks any Turing-complete programming language should be able to implement anything, but I had no idea Scratch had this level of complexity within it. For example, the define blocks above are functions! This (seemingly) kiddie programming language has functions! So far, all three of my kids have mostly used it as an animation studio of sorts, so that’s all I’d seen it do. I have a lot less prejudice against Scratch and Microsoft’s MakeCode now, and if they get more kids into programming, all the better!
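To make the “define blocks are functions” point concrete: a Scratch define block with an input slot maps directly onto a function with a parameter. Here’s a rough Python analog of the flower logic (hypothetical – it just computes the petal rotation angles rather than drawing anything):

```python
def petal_angles(petal_count):
    """A Scratch define block with an input is just a function with a
    parameter -- here it returns the rotation angle for each petal."""
    step = 360 / petal_count
    return [i * step for i in range(petal_count)]
```

Changing the input changes the whole drawing, which is exactly what Stella was doing by typing different numbers into the blocks.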

So Long Katello Foreman!

Last year when I went to Red Hat Summit, I saw a lot of use of Satellite. I’d tried Spacewalk, the upstream of the 5.x series, and it didn’t quite work out for me. But this time I would try it out, gosh darnit! I mean, with the Katello plugin it would even include Pulp, which I’d been interested in trying out before because it could cache RPMs during an upgrade. So I’ve been messing with it here and there. However, I don’t use Puppet scripts (Puppet is like Chef or Ansible in principle) and I don’t have the need to provision new machines or VMs (especially when that’s already pretty easy with Cockpit and/or Virt-Manager). It’s already easier and works more consistently for me to keep track of whether my computers are up to date (and update them if not) with Cockpit. The RPM caching part of it was neat, but recently it stopped working consistently. Upgrades are VERY fragile, and messing up on installing a plugin could bork the whole system. Also, the necessary packages – puppet, katello-agent, etc – were always behind in providing builds for Fedora. Turns out it was a bunch of extra work and frustration just to keep track of my computers – and I was already doing that in Dokuwiki.

So, farewell Katello and Foreman. It was fun to play with your technology because I’m a big computer nerd and found that fun. But there’s only so much time for sys-admin-ing and so after I get everyone unsubscribed and back on their usual repos, I’ll be getting rid of you.

Upgrading my Katello-Foreman-Managed RPM build VM to Fedora 32

Because I have this VM registered to Katello (a Foreman plugin) to receive updates (basically as a way of both keeping track of the computers and VMs on my network and also having a GUI for pulp to cache RPMs), I had to deal with katello-agent. The latest RPM in the official Foreman/Katello repos is unfortunately for Fedora 29, and that version of Fedora has been out of maintenance for a long time. Maybe Foreman (upstream of Satellite) is just used by most of its customers at RHEL sites that don’t have any Fedora nodes? I did find this copr that provides updated versions: https://copr.fedorainfracloud.org/coprs/slaanesh/system-management/

To upgrade, first I had to go into Katello and remove the VM from the subscriptions. Then I ran dnf repolist to confirm it was just looking at the official Fedora and RPM Fusion repos. After that I ran the upgrade process. Once that concluded, I was able to install katello-agent, which brought in goferd and all the other packages I needed.

After that I just needed to subscribe it to Fedora 32 repos in Katello and everything was golden.

New Dishes I cooked in April 2020

When it came to new dishes, April was all about bread. First, I made a no-knead bread with America’s Test Kitchen’s recipe.

Almost No-Knead Bread

It came out OK. I actually tried it again the following day to try and get a darker crust. The funny thing is that this is one of the easiest breads to make and yet it’s the one I’ve had the worst results with. The crumb wasn’t as open as it was supposed to be and for all the time it took to proof it was pretty meh.

My second new dish was Malasadas.

Malasadas

These are Portuguese doughnuts that made their way to Hawaii with colonists/explorers, and they’re now most famous in the US as a Hawaiian thing. (Funny, in all my 5 or so trips to Oahu, I never heard about them.) Well, since my family on my mother’s side comes from Portugal, I found out that my great-grandmother used to make these for my mom as an after-school treat. I can see why – they were incredibly tasty – better than any doughnut I’ve ever had.

Ubuntu 20.04’s Server Install

As I mentioned in my k3s on Ubuntu 20.04 post, I really thought that Ubuntu 20.04’s server install was pretty slick. I’m used to text-only server installs looking like this:

Arch Linux Installation Begins

Here’s a step-by-step collection of screenshots and my thoughts on each step of Ubuntu 20.04’s server install:

Language Selection

Just starting off, with the language selection, you can see this isn’t the usual ugly ncurses install. It looks like a beautiful matte black.

Updating installer during installation

Now, this right here is something I’ve never seen (that I can remember – maybe Arch or one of the other distros I looked at a long time ago did this?) and it’s something EVERY distro should do. For years now almost every distro has allowed you to install off the net instead of the CD/DVD/USB/ISO if you have the bandwidth. But this is the first time I’ve seen the ability to update the installer itself – important if the installer has some bugs (and I do remember some Fedora installers in the past having bugs and requiring me to get an updated ISO).

Keyboard configuration

After that great bit of innovation, we’re back into familiar territory here, setting up the keyboard.

Network connections

From there we move onto network connections. I’m just going to use this on the KVM NAT to test out server scenarios with Ubuntu. So I’ll just leave it on DHCP within that subnet. I skipped the proxy screen because I never use them and it’s pretty basic.

Configure archive mirror

Most folks would leave this the same, but it’s possible a University or large institution would have their own Ubuntu mirror to just grab the packages once, rather than for each computer that needs upgrades.

Guided Storage Configuration

Frankly this page looks exactly like most GUI installs, just with text selection instead of radio buttons and check-marks.

Partitions

Interestingly, I think this is one of those places where the larger the institution, the simpler this would likely be, with users connecting to some sort of NAS or other complex storage while keeping the front-end servers relatively simple.

Profile Setup

This was the screen I found the most interesting, as it is the one that diverges the most from Red Hat/CentOS/Fedora. First of all, there’s never a root password set. Second, under the server’s name it allows me to set what would be localhost, but doesn’t allow me to enter a localdomain.

SSH Setup

This is another radical departure and another one that I like. Well, I think the idea of a server without OpenSSH installed is very weird. BUT! I do like the ability to import an SSH identity for a potentially more secure login. Typically, in my experience, the Red Hat-based distros will have OpenSSH installed by default, but will not have the service enabled or started.

Popular Server Software

When I first installed Ubuntu Server 20.04, I was already awed by the slick-looking install and the self-updating installer by the time I got to this screen. THIS. IS. A. GREAT. FEATURE. Now, maybe Ubuntu Server is more likely to be installed by your Average Joe who got snared into Linux via Plex (as is often mentioned on /r/homelab). One of these people will show up to /r/homelab or /r/selfhosted once or twice a week to ask what else they should host on a Linux server. This list is a great example of what everyone else is running. The fact that they have sabnzbd on there makes me think this must come from raw numbers, not something Canonical is promoting. So maybe the engineer installing RHEL doesn’t need this list, because if you’re installing RHEL you’re doing it at work and you already know what you need. But I think CentOS and Fedora should really consider adopting something like this during the package selection.

And that’s the Ubuntu 20.04 server installer. A lot of the usual install prompts, but a few innovative ones that I want all the other distros to “steal” for their installers.

Checking out k3s and Ubuntu Server 20.04 Part 2

Clearly there’s a lot I don’t get about Kubernetes, and I didn’t install a GUI in that VM, so I can’t use the dashboard (which can only be viewed at localhost – or so the instructions seem to indicate). So I decided to go back to basics and look at the Hello Minikube tutorial, but run it in my k3s VM.

kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4

So I think this is the first part of why I was having problems yesterday with the pod I created from Podman. A lot of the commands I saw online implied a deployment, but I hadn’t created one. This is evidenced by:

kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hello-node   1/1     1            1           3m25s

While pods showed:

kubectl get pods
NAME                          READY   STATUS             RESTARTS   AGE
miniflux                      0/2     CrashLoopBackOff   357        16h
hello-node-7bf657c596-2wc2j   1/1     Running            0          4m2s

So perhaps one of the things I need to do is figure out how to put a pod into a deployment. The next command they have you run is pretty useful:
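For comparison, here is a minimal sketch of what a Deployment wrapping the hello-node container looks like – to move my miniflux pod into one, the containers and volumes from the podman-generated yaml would presumably go under spec.template.spec instead:

```yaml
# Sketch of a minimal Deployment; names and image are from the
# hello-node example above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4
```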

kubectl get events
LAST SEEN   TYPE      REASON              OBJECT                             MESSAGE
27m         Normal    Pulled              pod/miniflux                       Successfully pulled image "docker.io/miniflux/miniflux:latest"
7m12s       Warning   BackOff             pod/miniflux                       Back-off restarting failed container
5m27s       Normal    ScalingReplicaSet   deployment/hello-node              Scaled up replica set hello-node-7bf657c596 to 1
5m26s       Normal    SuccessfulCreate    replicaset/hello-node-7bf657c596   Created pod: hello-node-7bf657c596-2wc2j
            Normal    Scheduled           pod/hello-node-7bf657c596-2wc2j    Successfully assigned default/hello-node-7bf657c596-2wc2j to k3s
5m21s       Normal    Pulling             pod/hello-node-7bf657c596-2wc2j    Pulling image "k8s.gcr.io/echoserver:1.4"
4m14s       Normal    Pulled              pod/hello-node-7bf657c596-2wc2j    Successfully pulled image "k8s.gcr.io/echoserver:1.4"
4m8s        Normal    Created             pod/hello-node-7bf657c596-2wc2j    Created container echoserver
4m7s        Normal    Started             pod/hello-node-7bf657c596-2wc2j    Started container echoserver
2m13s       Warning   BackOff             pod/miniflux                       Back-off restarting failed container
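With more objects in play, the raw event stream gets noisy fast; kubectl can filter and sort it for you (these flags are from the standard kubectl reference):

```shell
# Only events for the miniflux pod, oldest first
kubectl get events \
  --field-selector involvedObject.name=miniflux \
  --sort-by=.lastTimestamp
```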

Although on a busy server I could see this getting overwhelming – hence OpenShift and other solutions that manage some of these things for you.

I’m still left uncertain of what I need to do to get things working. That said, for now, I think I’m just going to stick to Podman pods rather than the complexities of k3s. I don’t quite have the resources at the moment to run OpenShift, although perhaps I’ll give that another shot. (Last time I ran Minishift with OKD 3, it seemed to bring my computer to a crawl.)

Checking out k3s and Ubuntu Server 20.04 Part 1

As I’ve been working on learning server tech, I’ve gone from virtualization to Docker containers and now Podman containers and Podman pods. The pod in Podman comes from a view towards Kubernetes. I moved to Podman because of the cgroupsv2 issue in Fedora 31, so I figured, why not go all the way and check out Kubernetes? Kubernetes is often stylized as k8s, and a few months back I found k3s, a lightweight Kubernetes distro meant to work on edge devices (including Raspberry Pis!). For some reason (one I can’t find support for on the main k3s site), I got it in my head that it was better tailored to Ubuntu than Red Hat, so I decided to also take Ubuntu Server 20.04 for a spin.

While one of my cloud servers runs Ubuntu, I didn’t have to install it – I just spun it up at my provider. So it’s been a long time since I did an Ubuntu installation; I think the newest ISO I had before 2020 was one of the 2016 Ubuntu ISOs. The server install is VERY slick – the slickest non-GUI installer I’ve ever seen. I’ll have to do a future post about it. I liked that it detected a more up-to-date installer during the install and offered to download and use THAT installer, negating any potential installer bugs. One of the most interesting parts of the install was when it asked if I wanted to install some of the more popular server apps. The list was quite eclectic and must come from the popularity tool, because it even had SABnzbd and I can’t imagine Canonical pushing that on its own.

One thing I *am* used to from my Ubuntu cloud server that I loved seeing here is all the great information you get upon login. I wish CentOS or Red Hat would do something similar.

Ubuntu login information

I decided to go ahead with the k3s’ front page instructions under “this won’t take long”:

curl -sfL https://get.k3s.io | sh -
# Check for Ready node, takes maybe 30 seconds
k3s kubectl get node

After a bit – I didn’t time whether it was 30 seconds – I got back:

NAME   STATUS   ROLES    AGE   VERSION
k3s    Ready    master   95s   v1.18.2+k3s1
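One thing worth knowing (from the k3s docs, not shown in my session): the install script sets k3s up as a systemd service and writes its kubeconfig to /etc/rancher/k3s/k3s.yaml, which is why a plain kubectl often needs sudo or an explicit kubeconfig:

```shell
# k3s runs as a systemd unit
systemctl status k3s
# point a regular kubectl at the k3s kubeconfig
sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get node
```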

OK, looks like I have some Kubernetes ready to rock. I figured the easiest container program I’m currently running in Podman would be Miniflux. I already created a yaml file with:

 podman generate kube (name of pod) > (filename).yaml

That command generates the Podman equivalent of a docker-compose.yml file. You can use that yaml to recreate that pod on any other computer. The top of the file it generates says:

# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.

So I’d like to try that and see what happens. Of course, first I have to recreate the same folder structure; in that yaml I’m using a folder to store the data so that it’s easier for me to make backups than if I had to mount the directory via podman commands.
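Recreating the folder structure is just a mkdir away; the path below is hypothetical – it should match whatever hostPath the generated yaml actually references:

```shell
# Hypothetical data directory for the pod's volume mount
DATA_DIR="${HOME}/miniflux/db-data"
mkdir -p "$DATA_DIR"
# Confirm the directory exists before applying the yaml
ls -d "$DATA_DIR"
```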

After creating the folder, I transferred over the yaml file. Then I tried the kubectl create -f command.

sudo kubectl create -f miniflux.yaml

I waited for the system to do something. Eventually I got back the feedback:

pod/miniflux created

Being new to true Kubernetes (as opposed to just Podman pods), I wasn’t sure what to do with this information. But I was happy that it hadn’t simply failed. Taking a look at the documentation for k8s, I learned about the command kubectl get. So I tried

kubectl get pods
NAME       READY   STATUS             RESTARTS   AGE
miniflux   0/2     CrashLoopBackOff   75         3h7m

Welp! That doesn’t look good.

Following along on the tutorial I typed

sudo kubectl describe pods

This gave a bunch of info that reminds me of a docker or podman info command. But the key to what was going on was at the end:

Events:
  Type     Reason   Age                     From          Message
  ----     ------   ----                    ----          -------
  Normal   Started  33m (x34 over 3h8m)     kubelet, k3s  Started container minifluxgo
  Normal   Pulled   28m (x34 over 3h7m)     kubelet, k3s  Successfully pulled image "docker.io/library/postgres:latest"
  Warning  BackOff  13m (x739 over 3h5m)    kubelet, k3s  Back-off restarting failed container
  Warning  BackOff  3m40s (x772 over 3h5m)  kubelet, k3s  Back-off restarting failed container

To see the pod’s logs, I ran:

kubectl logs -p miniflux -c minifluxgo
kubectl logs -p miniflux -c minifluxdb

That’s because the pod had more than one container. Turns out the issue was with the database: it was complaining about being started as the root user.
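One possible fix – a sketch I haven’t tested against this yaml – is to edit the generated file so the database container doesn’t run as root; the official postgres image creates a postgres user with uid 999 for exactly this purpose:

```yaml
# Fragment to merge into the pod's containers list (uid 999 is the
# postgres user in the official image; verify against the image docs)
- name: minifluxdb
  image: docker.io/library/postgres:latest
  securityContext:
    runAsUser: 999
```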

(Sidenote: awesomely Ubuntu Server Edition comes with Tmux pre-installed!)

Strangely there doesn’t seem to be a way to restart a pod. The consensus seems to be that you use:

kubectl scale deployment <> --replicas=0 -n service 

So I tried that. Apparently it doesn’t work when the pod was created from a podman-generated yaml – there’s no deployment behind it to scale.
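For a bare pod like this one (created straight from a yaml, with no deployment behind it), the closest thing to a restart I know of is deleting the pod and re-creating it from the same file:

```shell
sudo kubectl delete pod miniflux
sudo kubectl create -f miniflux.yaml
```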

Eventually I decided to try to get underneath k3s. The replacement for docker or podman in k3s is crictl.

crictl ps

This showed my containers. I thought I had maybe fixed what was wrong with the database, so I tried:

crictl start (and the container ID)

Apparently it doesn’t want to do that because the container is in an exited state. This whole thing is so counter-intuitive coming from Docker/Podman land. Then again, when I went to try again, the container ID had changed because k8s had tried to restart it. So it truly is ephemeral.

Well, that’s all I could figure out over the course of a few hours. I’ll be back with a part 2 if I figure this out.

Fedora Silverblue as an HTPC Part 3

Yesterday I mentioned some issues with my Ortek MCE VRC-1100 remote and certain buttons not working. It turns out that in addition to removing the XF… entries in dconf, I also had to remove them via gsettings. Specifically, I used the commands:

gsettings set org.gnome.settings-daemon.plugins.media-keys stop-static "['']"
gsettings set org.gnome.settings-daemon.plugins.media-keys play-static "['']"
gsettings set org.gnome.settings-daemon.plugins.media-keys pause-static "['']"
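Since the three commands differ only in the key name, they can be collapsed into a loop (a sketch; it assumes the same schema and that an empty-string list is what disables the binding):

```shell
# Clear all three media-key bindings in one pass
for key in stop-static play-static pause-static; do
  gsettings set org.gnome.settings-daemon.plugins.media-keys "$key" "['']"
done
```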

After that, everything was working as it should. So far no negatives to using Fedora Silverblue as our HTPC. We’ll see if that changes as I try to get Lutris to launch some Wine games.