A user of my Extra Life Donation Tracker program discovered that I had introduced a regression for brand-new users who didn’t have a persistent settings file. I thought about it overnight and it was exactly what I suspected – when I switched away from threading for the GUI, I forgot to add a way to tell participant.py to reload its settings values. I also decided to take a page from a programming podcast I heard recently and change the settings GUI to have only a “Save” button instead of both a persistent-save button AND a plain Save button – especially since I wanted most people to do a persistent save, and that’s not what they’d most likely do by default. So I ended up making a 5.2 release to fix both of those issues.
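For anyone curious what “tell participant.py to reload its settings” might look like, here’s a minimal sketch. The `Settings` and `Participant` names are illustrative stand-ins, not the project’s actual API:

```python
# Hypothetical sketch of a settings-reload hook; names are illustrative,
# not the actual eldonationtracker classes.
import json
from pathlib import Path


class Settings:
    """Loads persistent settings from a JSON file, with defaults for new users."""

    DEFAULTS = {"participant_id": "", "donation_sound": "default.mp3"}

    def __init__(self, path):
        self.path = Path(path)
        self.values = dict(self.DEFAULTS)
        self.reload()

    def reload(self):
        # Re-read from disk so changes made in the settings GUI are picked up.
        # A missing or empty file just leaves the defaults in place (the
        # brand-new-user case that triggered the regression).
        if self.path.exists():
            text = self.path.read_text()
            if text:
                self.values.update(json.loads(text))


class Participant:
    def __init__(self, settings):
        self.settings = settings

    def on_settings_saved(self):
        # Without a hook like this, the running tracker keeps stale values.
        self.settings.reload()
```

The key point is the explicit `on_settings_saved` call: once the GUI no longer runs the tracker in a separate thread, nothing re-reads the file unless you ask.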
For the past week I’ve been wracking my brain trying to figure out how to get Scarlett’s Sibling Chooser to actually be random in its selections. Turns out its selections were predictable because, unlike a full computer, it doesn’t have a way to create a truly random seed. (On Linux computers a few years ago, generating entropy was an explicit step you’d see as part of the install.) If I were coding in Arduino or Python, I’d be able to create a seed for the program by reading the light sensor. But that’s not possible in MakeCode. So I came up with another way to introduce randomness: I have the random function running in a forever loop. Since that loop runs “infinitely” fast, it’s very unlikely the user will ever hit the same point in the sequence, and it’ll be random enough. In my testing it was very random-seeming, so that works out well. Here’s the new code:
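The forever-loop trick is MakeCode-specific, but the light-sensor seeding mentioned above would look roughly like this in Python. `read_light_sensor` is a stand-in for whatever analog read your board provides – on a Circuit Playground Express under CircuitPython it would be an `analogio.AnalogIn` reading:

```python
import random


def read_light_sensor():
    # Stand-in for a real analog read of the light sensor.
    # On real hardware, ambient light gives a slightly different
    # value each time the board boots, which is what makes it a
    # usable seed source.
    return 512


def choose_sibling(siblings, seed=None):
    # Seed a private generator from the sensor once, then pick uniformly.
    # Passing an explicit seed makes the choice reproducible for testing.
    rng = random.Random(read_light_sensor() if seed is None else seed)
    return rng.choice(siblings)
```

The same seed always yields the same pick, which is exactly the predictability problem – the fix is making sure the seed itself varies between runs.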
Spent the last few days finalizing the 5.0 release of my Extra Life Donation Tracker and then pushed ahead to get version 5.1 out. Here’s the PyPI page if you want to use it for your Extra Life live streams.
I got back to my BBQ Thermostat project and did some minor programming while trying to figure out how to run a computer case fan. So far I’m still working on the wiring aspect of this part of the project, but some folks on Reddit did point out that part of my problem was a pair of my BJT connections being mixed up.
Since I’ve been making a lot of great progress programming with the kids in Scratch, I bought some Circuit Playground Expresses to program with the kids. The Circuit Playground Express can be programmed in Arduino’s C dialect, CircuitPython, or Microsoft MakeCode, which uses blocks like Scratch. Today Scarlett and I made our first useful bit of code, a digital spinner we called The Sibling Chooser. Here’s the code, and you can see that it is indeed like Scratch:
After we tested the code in the website’s simulator, Scarlett got to work on the spinner background.
And here’s the finished version with the page where we figured out the logic for the program:
Here’s a video of it in use:
Afterwards, I remembered that I had a JST battery holder that I’d bought for use with an Arduino project. So this is how it looks with the battery holder – the most important part being that it no longer blocks anyone’s name:
It was neat being able to come up with a useful project and Scarlett was able to understand the code enough to help me find an error when I was trying to figure out the logic. I look forward to more electronics projects with the kids.
As I mentioned before, I got a Raspberry Pi Zero W to replace my Arduino MKR WIFI 1010 and ENV board in the bathroom. My Pimoroni Enviro Mini pHAT (or bonnet, as Adafruit calls them) finally arrived a few days ago, so I set up a git repository for my code. The Pimoroni Enviro+/Enviro Mini git repository has a one-line install script, but I’d rather do things manually so I know what I’m doing and also so I can set up a proper requirements.txt in my Python venv.
sudo apt install python3-venv
In my development directory: python3 -m venv .
activate the virtual environment: source bin/activate
pip install enviroplus
Enable I2C and SPI: either run sudo raspi-config interactively or use its non-interactive commands. I went for interactive.
I’m not using the Enviro+ and air quality monitor, so I don’t need to enable the serial interface.
install a few more dependencies: pip install numpy smbus setuptools Pillow RPi.GPIO
That should give you a system that’s ready to work with the pHAT. So I cloned Pimoroni’s repo and ran the all-in-one-enviro-mini.py in the virtual environment. Turns out if you install numpy via pip and not apt, you’re missing a dependency. So you EITHER need to run
sudo apt install python3-numpy OR sudo apt install libatlas-base-dev
I did the latter.
You will also need to run
sudo apt install python3-pil libopenjp2-7-dev
to make sure you have some dependencies on the system.
The all-in-one-enviro-mini script produces a graph on the screen. To change which graph is displayed, cover the light sensor. It’s the demo that’s shown on the product page. It’s not quite what I wanted to base things on, but at least I knew it was now working. Note: Control-C will quit the program, but will not turn off the screen. It just stops updating.
After taking a look at the various examples they have on there, it looks like I’m going to want to base my code on weather-and-light.py. My goals will be to:
add MQTT via the Paho library so it can communicate with Home Assistant (or anything else I hook up to MQTT)
set it up to turn off the screen at night when we’re asleep (no need to burn out the LEDs if no one’s going to be looking)
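Those two goals can be sketched roughly like this – the wake/sleep times and topic name are my assumptions, and the actual Paho publish call is left as a comment so the logic stays self-contained:

```python
import json
from datetime import time


def screen_should_be_on(now, wake=time(7, 0), sleep=time(23, 0)):
    # True between wake and sleep; outside that window the pHAT's
    # backlight can be driven low so the LEDs aren't lit overnight.
    # The 7:00/23:00 defaults are placeholders, not anyone's schedule.
    return wake <= now.time() < sleep


def make_mqtt_payload(temperature_c, pressure_hpa, humidity_pct):
    # Home Assistant's MQTT sensor integration can parse JSON payloads
    # with value_template, so one topic can carry all three readings.
    return json.dumps({
        "temperature": round(temperature_c, 1),
        "pressure": round(pressure_hpa, 1),
        "humidity": round(humidity_pct, 1),
    })


# Publishing would then be roughly (assuming the Paho library):
#   import paho.mqtt.client as mqtt
#   client = mqtt.Client()
#   client.connect("homeassistant.local")
#   client.publish("bathroom/enviro", make_mqtt_payload(t, p, h))
```

Keeping the schedule and payload logic in plain functions like this also makes them easy to unit test without any hardware or broker attached.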
I noticed that something had gone screwy with the Raspberry Pi 1B in the garage that was monitoring the garage door. I restarted it and discovered that the last time I was coding – working on making it more robust against a temporary lapse in WiFi (so it wouldn’t crash) – I’d created a little error. Fixing that error led me to realize that my new robustness code had introduced an unfortunate artifact: it would pass a status of “unchanged” to MQTT. So I fixed that. Code’s now in a good place. I just need to add a few more config options to make it more usable for others who aren’t me. Then I’ll make another release.
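The publish-only-on-change idea looks roughly like this – a toy sketch with hypothetical names, not the monitor’s actual code. The publish callable is injected so the example doesn’t depend on the Paho client; in the real script it would be `client.publish`:

```python
class StatusPublisher:
    """Only pushes a garage-door status to MQTT when it actually changes."""

    def __init__(self, publish):
        # publish is any callable taking the status string,
        # e.g. lambda s: client.publish("garage/door", s) with Paho.
        self._publish = publish
        self._last = None

    def update(self, status):
        # Skip placeholder states and repeats so subscribers never
        # see an "unchanged" message or duplicate notifications.
        if status == "unchanged" or status == self._last:
            return False
        self._last = status
        self._publish(status)
        return True
```

Filtering at this layer means the sensor-polling loop can run as often as it likes without spamming the broker.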
Now, that is a lot more useful than even Chrome or Firefox with lots of tabs open, where you can’t read the tab text anymore because the tabs have been compressed beyond usefulness. So far I’m really liking Vivaldi. I also have had occasion to download something with the browser, and I like the download tab better than what’s in either Chrome or Firefox.
On a whim yesterday I installed Vivaldi on my Linux (Fedora 32) Laptop. It’s a refurbished Dell I got a few years ago. Firefox takes ages to start up on that laptop, but Vivaldi is up in a snap! If that behavior keeps up, I’ll definitely be making Vivaldi my default on the laptop.
Once again I worked on some Scratch projects with the kids. This time it involved a sports theme. I let the kids choose which games they wanted to create and Sam chose to create the game where Scratch the Cat goes Skiing.
Having done this for a few weeks now, it wasn’t too much trouble to get things working and do a mod or two for Sam. The video below contains the code and a short video of Sam playing the game.
Stella chose to create the Penguin Soccer game. This was the first two-player game we made. We had a weird situation where no matter what I did with the code, the ball would float around. Eventually I had to just reload the original file and start over. It’s possible that Stella hit a button while I was trying to read the instructions and altered “gravity” or something. At any rate, it was pretty easy to get going and make her mods as well. I continue to be amazed at what can be created in Scratch.
Today I upgraded my main Linux computer to Fedora 32, giving me access to Python 3.8. I tried to upgrade my virtual environment and it’s possible I did something wrong, but I had to reinstall all of my packages. It’s not the end of the world, but a little annoying. I guess it’s a good thing I had set up a good requirements.txt file. Last week I did some work on my Extra Life Donation Tracker – working towards my version 5.0 – but I forgot to document it on the blog.
It’s been about a month since Fedora 32 was released, so I decided to try and upgrade Supermario to Fedora 32. First I had to disable the dropbox repo since they don’t have a Fedora 32 binary yet. Other conflicts included:
bat in module
gimp in module
meson in module
ninja in module
python3-pytest-testmon (doesn’t belong in a distupgrade repo)
python2-beautifulsoup needs python2-lxml
The Python ones are no-brainers to me. I use virtual environments now, so I don’t care about the system libraries. I can get rid of those.
At first when I was writing this, I was going to lament that modules were still causing issues with upgrades. But it turns out that while it was displaying that info, once I fixed the Python issues it auto-disabled the modules. So, good job, Fedora upgrade QA team!
I did a quick review of what it was going to delete as part of the upgrade. Apparently it’s finally time for PyQt4 to go away. A bunch of other Python 2 packages bit the dust. The only concerning issue was that a lot of gstreamer packages were being blown away. Because the list of packages to upgrade was so long, I couldn’t see if they were being replaced by a renamed package or an all-encompassing package. So we’ll see if multimedia ends up broken on my system. In my experience it’s the only part of Linux that can still be occasionally frustrating – especially since I like to work with video and need all the codecs to be working.
After a reboot, at the very least my music collection plays. I’ll have to see over time what, if anything, broke.
The next project I wanted to work on was to see if maybe my environment monitoring might be slightly more reliable with a Raspberry Pi than with an Arduino. So I wanted to do some comparisons. For my bathroom IoT project, I am using:
That’s a total of $66.40 before taxes and shipping. In a way, that’s pretty incredible because the Raspberry Pi kicks the Arduino’s pants off when it comes to computing power. Also, the Pimoroni Enviro Mini pHAT has a screen to display whatever you want – on the product page it shows a graph of the data being measured. It also has a button. And it’s basically the same size:
In fact, this whole endeavor has really made me think a lot about how to decide which boards to use for various projects. On the one hand, the Raspberry Pi (any version or revision) is going to be easiest to program and debug. On the programming end, you can use pretty much any programming language you’re familiar with – from Python to Rust to Go. On the debugging end, you can plug it into a monitor via HDMI. You can SSH into it (as long as it’s connected to WiFi). If you’ve designed your project to do so, you can check log files. It’s got a lot more RAM, and it’s got as much storage space as you want to add via an SD card. Arduino, on the other hand, needs to be programmed in its C variant. (Although Adafruit’s boards – and many others – can run CircuitPython.) For debugging, I had to add on a breadboard with some LEDs so I could tell what was going on, because sometimes plugging it into a laptop to debug is either impractical (vs using SSH with a Raspberry Pi) or changes enough about what’s going on to mask the issue. And, as you can see above, when you’re looking at a Raspberry Pi Zero or Zero W, you’re looking at the same footprint (that is to say, they take up the same space in your project).
Of course, there are reasons people use Arduinos and Adafruit Feathers (and other boards). For starters, while debugging is easier with a Raspberry Pi (it’s running a whole freakin’ Linux distro), it’s also harder with a Raspberry Pi (because it’s running a whole freakin’ Linux distro!). To give an example, I bought a Raspberry Pi 4 for my daughter to use as a jukebox running Mopidy. After an update of the system, something changed in the libraries and the way they handled outputs that made it stop working. It was a week of work to figure out what went wrong. (And I wouldn’t have figured it out without help on the Mopidy forums.) By contrast, an Arduino is (to simplify a bit) ONLY running your code. With the exception of firmware upgrades for various chips, there’s nothing to update. If your code works now, it will always work unless you change something about your code. Speaking of always working – that brings me to one of the reasons I wanted to write this post: sometimes the Raspberry Pi can be quite fragile. I didn’t have a 5V 2.5A power supply available (the minimum recommended by the Raspberry Pi Foundation), so I used a 5V 2A charger. When I did a shutdown, it needed a little more power, the card became corrupted, and I had to reflash it. At least I was able to switch to Raspbian Lite, which dumps you straight to the console. No need to waste space and RAM on a desktop environment if it’s just going to be measuring environmental data. Back to reasons you might use an Arduino, Feather, etc. – they’re generally referred to as prototyping boards. Because they’re so simple, if you end up using them to build something useful, you can then get a single board printed with exactly the chips and connections you’re using from the Arduino board – and that’s a pretty powerful proposition. Also, generally speaking, a power outage is not going to screw up your program, whereas it can cause corruption on a Raspberry Pi from not shutting down correctly.
Going forward, I’m not 100% sure what I’m going to choose, but I imagine it will depend on a variety of features. Do I need it to survive power failures? Is it a one-off or something I might want to duplicate? Do I want to challenge myself with C(ish) code or just do something easy with Python? How much money do I have for my build? And do I care about needing to update libraries to keep my network safe?
I don’t get as many comments on my posts nowadays – partly because the internet culture has shifted to commenting on Reddit, Facebook, Hackaday, etc, but if you are a fellow maker, I’d love to hear your decision-making process when trying to select a board for a project.
For the first browser I wanted to check out on Windows, I decided to check out Vivaldi. My thought process is that I’m most likely to end up with Brave, so better to save that one for last. But as I went through the first-run process in Vivaldi and saw the nice polish the browser seems to have, it really started tugging on me, saying, “Are you sure you wouldn’t want to just stay with Vivaldi?” For this first post, I’d like to cover the first-run process and then a little video poking around the interface. This’ll be followed up in a while with any impressions I’ve come away from my usage of Vivaldi on Windows.
During import settings, it appears to just list every major browser (and Vivaldi) regardless of whether you have it on the system. (Or maybe I’ve installed Chromium in the past and the config files are in the Windows equivalent of my home directory?)
I found this to be a very interesting settings page. When they ask you about blocking trackers and ads, who the eff chooses not to even block trackers?!? Of course, I understand not blocking ads so that sites can get paid. But I don’t understand who *wants* to get tracked (other than researchers). I’m going to block everything and then see if sites later ask me to whitelist them – that would give me some insight into what the whitelist tool looks like.
When I installed Vivaldi on my Linux computer I decided to use tabs on the side. On the one hand, one of my Windows monitors is a 2k monitor. So it’s OK to have tabs on top – I would imagine most sites are expecting users to have 1080 or lower resolution. On the other hand, it IS widescreen and many websites don’t actually take up lots of horizontal space. So why not go with a left or right tab structure (especially since I’m doing my Windows taskbar on the bottom). Since they already have a bar on the left (which I will explore in the video below), I decided to put my tabs on the right to not end up clicking on their other features by accident.
Speaking of those features, I know it might be kind of passe at this point to have bookmarks – maybe geeks/nerds just leave tabs open forever. I still use bookmarks here and there. So I like that it’s a button on the left in Vivaldi rather than being somewhat hidden in Chrome.
I think it’s neat that at the end they offer up a link to video tutorials. Sometimes videos work better than words, as you’ll see below.
Interesting initial speed dial. These plus Bing as the default search are all paid positions. That is to say, Amazon, Walmart, and the others are all paying Vivaldi some money to be there when you first install it and until you accumulate some automated speed dial entries.
The Notes Manager feature could make this an excellent browser for students in K-12 and College classes. Also the note-taking screenshot gives an example of how many websites are not formatted to take advantage of a widescreen view; thus it’s fine to have the tabs on the side.
Now, here’s a video where I explore the various buttons all over the UI:
Notes after using for just a few days:
So far I have had one instance where listing the tabs on the side (instead of the top) didn’t work well. I had to shrink the tab bar on the right to its skinniest size for the YouTube upload page to show its entire UI while taking up half of a 1080p screen. Why did I have it squashed to just half the screen? So that I could have the files I was dragging in to upload on the left side.
Having just learned of the existence of speed dial groups, I am already jealous that I don’t have them at work. IT is very strict about which browsers we can have installed.
Well, on Linux I bounced back and forth between Firefox and Chrome, depending on which one was getting better performance. At this point, for what I do, Firefox is the winner for me. I use it on my laptop and desktop and it gets things done without getting in my way. I don’t necessarily have the most modern GUI setup because it tends to keep your GUI settings as you upgrade. This is what it looks like:
I’ve set a dark color scheme to match my dark scheme on KDE.
I don’t use Chromium, but I have it installed. I use Falkon sometimes to segregate browsing and reduce the risk of cross-tab scripting malware. Chrome actually now takes FOREVER to start up on my computer (relative to Firefox, Falkon, or non-browsers), so I rarely use it.
Over on Windows I use Chrome. I think for a little bit Firefox had fallen behind in terms of performance on Windows and I haven’t really seen a reason to change my defaults.
qutebrowser – “everyone” loves Vim. Nearly every editor has a Vim-compatibility mode. This browser takes that to the extreme. It’s a keyboard-driven browser (perhaps it should be the default on the Ratpoison window manager) that uses a lot of Vim shortcuts – like hjkl for moving around on a page and : (colon) to give various commands to the browser. On the negative side, you have to completely relearn all your browser keyboard shortcuts. On the positive side, you have almost no need for any mouse usage (there’s even a mode that overlays a shortcut letter on each link so you don’t have to move link-by-link, as I’ve had to do in the past with a command-line browser), so it could be extra great for RSI sufferers who are trying to limit their mouse usage. There’s also probably a pretty big productivity gain once you get used to the shortcuts, because your hands never have to leave the keyboard.
Vivaldi – If you go through my old browser posts you’ll see that there was a chunk of time where I really loved Opera and ran it as my main browser on Windows. Vivaldi is the spiritual successor to Opera after the company went from being Scandinavian-owned to Chinese-owned; a bunch of the hackers went off to start Vivaldi. This, like Brave, is another Chromium-based browser. But they bring Opera-level customization and features to the browser, transforming it into something you’d never recognize as sharing the same code under the hood as Chrome. Of course, this makes it similar to qutebrowser in the sense that you really have to go all-in to truly comprehend and use all the features. (Just go to that front page and scroll through the features – as of today there are 15 features touted on the home page.) This is why, like the original Opera, those who love Vivaldi LOVE Vivaldi. Once you get to that level, it’s hard going back to a plain browser. I’ve played with it a little, but haven’t really given it a proper spin. (Partially because when I first installed it onto Fedora it was still in beta.) I could see this possibly becoming my daily driver on Windows and maybe on Linux.
Microsoft Edge?!? – obviously this would just be on Windows. As you can see in the image below, no Edge for Linux. (Yet?) Ever since Firefox came along, I never liked any iteration of Internet Explorer, even as they tried and tried to make it less crufty. When they first introduced Edge, I tried it a couple of times because it was the default on a computer. I didn’t care for it. But now it’s based on Chromium AND it seems to be getting rave reviews online. Could that be paid content? Perhaps. Nothing surprises me on the web anymore. But perhaps it’s worth checking out…
So, there we go, a trio of browsers I’m interested in and why I believe the browser space is getting to be a lot of fun, even if we’re mostly down to either a Chromium or a Firefox (is their engine still Gecko?) backend. (Note: qutebrowser uses QtWebEngine, which is itself based on Chromium.) In some ways, browsers have become like Linux distros. They’ve all got the same underlying packages – gcc, KDE, etc – but what they bring to the table is the UI and how polished the final product is. It’s not a perfect metaphor because Google controls both the package equivalent (Chromium) and the distro equivalent (Chrome), but there’s still room for improvement without the stagnation we saw when Internet Explorer had a 90% market share. (At least not yet. We’ll see what happens with Google if Brave, Ungoogled Chromium, and other potentially revenue-sapping browsers catch on.)
On Windows I could easily see myself ending up on Brave. I’m already using Chrome, so why not a Chrome with less tracking? Also, my browser on Windows is of minimal importance these days since my Windows computer is just my video game machine / video game streaming machine. 99% of my Windows web browsing is to YouTube to upload my “Let’s Play” videos. Or maybe I could once again go the Opera route and do Vivaldi. Maybe even the unthinkable (for 10-years-ago Eric) and go back to Microsoft with Edge.
On Linux I’m probably more likely to stay with Firefox. But I could maybe see myself being tempted by qutebrowser or Vivaldi. Brave is a distant possibility, but maybe it will convince me.
Because last week was busy with house projects, this week I continued the ocean/water-themed programming from the Raspberry Pi Foundation. The first project was a game I made with Stella (her first computer game creation), a boat race in Scratch. While we mostly stuck to the tutorial, we did partake in the challenges, including adding a shark and figuring out how to have background music (which Stella chose on her own). That turned out to be really tough, as my attempts to figure out how to use the loops led to stuttering. Eventually I figured it out, and you can see what I did, the rest of the code, and how the game plays by watching the video below:
Of course, it turns out that if I had done Scarlett’s project first, I would have learned how to play music in the background of a Scratch game. Scarlett’s Scratch project was to make a synchronized swim for Scratch the Cat. Scarlett’s customization was to choose the music as well as to add the ability to choose how many cats were in the pool for the swim. Check out the video below to see the code as well as what the final project looked like:
Starting off with this code from the Raspberry Pi Foundation, Sam made his first-ever video game. I then modified it so the shark would close his mouth when he eats the fish and added a sound at the end when a score of 5 was reached. Here’s a video of Sam playing his game:
For Python I continued to work on writing unit tests for my Extra Life Donation Tracker. Doing so helped me find a few bugs and a few functions that I was able to combine to reduce the code complexity. Overall, this has been a very productive phase for the project, even if it has been very frustrating figuring out how to mock out certain functions like those reaching out to the net or those writing to disk. Depending on what I can do with some help that someone offered me on reddit, I will either go back to the extralife_io.py file to try and finish that one up or I may declare unit testing completed for now and tackle that later when I’ve learned even more about how Python works and how to test it.
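For anyone curious what mocking out a network call looks like, here’s a toy example with made-up function names (not the tracker’s actual code). The patch replaces `urllib.request.urlopen` for the duration of the test, so it runs entirely offline:

```python
import unittest
from unittest import mock
import urllib.request


def fetch_donation_total(participant_id):
    # Imagine this hits the Extra Life API over the network.
    # The URL is a deliberately fake placeholder.
    url = f"https://example.invalid/api/{participant_id}"
    with urllib.request.urlopen(url) as response:
        return response.read().decode()


class FetchTest(unittest.TestCase):
    @mock.patch("urllib.request.urlopen")
    def test_fetch_without_network(self, mock_urlopen):
        # The mock stands in for the real network call. We wire up the
        # context-manager protocol so `with urlopen(...) as response` works.
        fake_response = mock.MagicMock()
        fake_response.read.return_value = b"150.00"
        mock_urlopen.return_value.__enter__.return_value = fake_response
        self.assertEqual(fetch_donation_total("12345"), "150.00")
```

The fiddly part – and likely the kind of thing that makes mocking frustrating at first – is matching the shape of the real object (here, the context manager returned by `urlopen`), not the patching itself.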
I’ve mostly been working on version 5.0 of my Extra Life Donation Tracker. Since I adopted the Semantic Versioning principle for the project, an API change means a major version change. I’ve been taking everything I’ve learned about Python programming from 2019 and 2020 and trying to make my code both more Pythonic and more sophisticated. I’m also trying to move towards 100% code coverage. That is to say, I’m looking to make sure every line of code is covered by a test. While 100% test coverage doesn’t guarantee perfect code (after all, the test itself could be flawed, or I might not be considering corner cases), striving for it usually has a few benefits:
by thinking about how you will test the code, you have to think about how the code works. This sometimes reveals errors.
thinking about what you will test sometimes leads you to consider corner cases you hadn’t realized were there
well-written tests help when dealing with bug fixes, new features, and refactors by helping to prevent/reduce introduction of new bugs
Getting to 100% coverage has been pretty hard, and I’m currently struggling with how to force my code to raise errors so I can make sure I’m catching them correctly. I expect getting to 100% coverage will take the rest of today and probably go into next week. After that, I’ll be adding MyPy coverage (trying to make sure I’ve got type annotations throughout the code), using the “rich” module to generate better user output on the commandline, some tidying up, and then providing some new output for team-focused streamers.
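One common way to force those error paths – again with hypothetical names, not the tracker’s real code – is mock’s `side_effect`, which makes the patched call raise so the `except` branch gets exercised:

```python
from unittest import mock


def write_donor_file(path, text):
    # A write that's hard to make fail on a healthy filesystem.
    try:
        with open(path, "w") as f:
            f.write(text)
        return True
    except OSError:
        # The error-handling branch that coverage says is never hit.
        return False


def test_write_failure_is_handled():
    # side_effect makes every call to open() raise, forcing the
    # except branch without needing a full disk or bad permissions.
    with mock.patch("builtins.open", side_effect=OSError("disk full")):
        assert write_donor_file("donor.txt", "hi") is False
```

The same pattern works for network errors: patch the request function with `side_effect=ConnectionError(...)` and assert the code degrades gracefully.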