Stratis or BTRFS?

It’s been a while since btrfs was first introduced to me via a Fedora version that had it as the default filesystem. At the time, it was especially brittle when it came to power outages, and I ended up losing a system to one. But a few years ago I started using btrfs on my home directory, and I even developed a program to manage snapshots. My two favorite features of btrfs are Copy on Write (COW), which lets me make snapshots that only take up space when the snapshotted files change, and the ability to dynamically set up and grow RAID levels. I was able to use the latter recently to get my photo hard drive onto RAID1 without needing an extra scratch drive, because unlike most RAID solutions, btrfs converts in place instead of destroying what’s on the disk.
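For reference, the kind of snapshot I’m talking about is just a one-liner. This is a sketch – the subvolume and destination paths here are placeholders, not my actual layout:

# read-only, copy-on-write snapshot of the home subvolume
btrfs subvolume snapshot -r /home /home/.snapshots/home-$(date +%Y%m%d)

Space only gets used as the live files diverge from the snapshot.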

However, btrfs has been plagued by some serious issues – for example, RAID5/6 is unstable and not recommended, and after many years it still hasn’t solved the write hole (something the very similar ZFS solved years ago). Look online and you’ll find scores of tales of people who have suffered unrecoverable data loss from btrfs.

A few years ago Red Hat deprecated btrfs on RHEL6. That makes sense given the long support lifetimes of RHEL releases: the team at Red Hat has to backport kernel fixes, that gets complicated as time goes by, and btrfs has grown by leaps and bounds since RHEL6. But a couple of days ago (as I write this – 10 days before this post is scheduled to appear), Red Hat announced it is being deprecated on RHEL7 as well. There was lots of speculation on the net, and someone who used to hack on btrfs for RHEL mentioned that since he left, no one at Red Hat has worked on it; SUSE is the distro that employs btrfs hackers at this point. Then, yesterday, Stratis was announced. Here’s what I gathered from the Phoronix article where I first read about it.

First a quote from the announcement of Stratis:

Stratis is a new tool that meets the needs of Red Hat Enterprise Linux (RHEL) users calling for an easily configured, tightly integrated solution for storage that works within the existing Red Hat storage management stack. To achieve this, Stratis prioritizes a straightforward command-line experience, a rich API, and a fully automated, externally-opaque approach to storage management. It builds upon elements of the existing storage stack as much as possible, to enable delivery within 1-2 years. Specifically, Stratis initially plans to use device-mapper and the XFS filesystem. Extending or building on SSM 2.1.1 or LVM 2.1.2 was carefully considered. SSM did not meet the design requirements, but building upon LVM may be possible with some development effort.

From the Fedora wiki page describing its planned landing in Fedora 28:

a local storage system akin to Btrfs, ZFS, and LVM. Its goal is to enable easier setup and management of disks and SSDs, as well as enabling the use of advanced storage features — such as thin provisioning, snapshots, integrity, and a cache tier — without requiring expert-level storage administration knowledge. Furthermore, Stratis includes monitoring and repair capabilities, and a programmatic API, for better integration with higher levels of system management software.

Then from the author of the Phoronix article:

For Stratis 1.0 they hope to support snapshot management, file-system maintenance, and more. With Stratis 2.0 is where they plan to deal with RAID, write-through caching, quotas, etc. With Stratis 3.0 is where it should get interesting as they hope for “rough ZFS feature parity” and support send/receive, integrity checking, RAID scrubbing, compression, encryption, deduplication, and more. Only in the first half of 2018 is when they expect to reach Stratis 1.0. No word on when they anticipate getting to Stratis 3.0 with ZFS feature parity.

Interesting. It led me down a path of exploring LVM and other tech. First of all, I don’t imagine btrfs is going to sit still, unworked on, while this happens. Maybe it finally reaches its stability goals. Maybe the threat of Stratis attracts more hackers to btrfs. Or maybe Stratis catches up with, and surpasses, btrfs. I think if they can make dynamic RAID work and get stability up to ZFS levels, I could move over to Stratis. If not, I’m still thinking about LVM with XFS or ext4 for my home-built NAS rather than btrfs (or alongside btrfs, if that doesn’t get too complex, for snapshotting purposes), because that would potentially let me grow directories indefinitely as my backup needs grow (a sketch of what that looks like is below). This will require more knowledge and planning, though. I’ll keep documenting my research here.
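The LVM growth workflow I have in mind is roughly this – hypothetical volume group and logical volume names, assuming XFS on top:

# add a new disk to the pool and grow the photos volume into it
pvcreate /dev/sde1
vgextend nas_vg /dev/sde1
lvextend -L +2T /dev/nas_vg/photos
xfs_growfs /media/Photos   # XFS grows online, addressed by mount point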

btrfs scrub complete

This was the status at the end of the scrub:

[root@supermario ~]# /usr/sbin/btrfs scrub start -Bd /media/Photos/
scrub device /dev/sdd1 (id 1) done
 scrub started at Tue Mar 21 17:18:13 2017 and finished after 05:49:29
 total bytes scrubbed: 2.31TiB with 0 errors
scrub device /dev/sda1 (id 2) done
 scrub started at Tue Mar 21 17:18:13 2017 and finished after 05:20:56
 total bytes scrubbed: 2.31TiB with 0 errors

I’m a bit perplexed by this output. Since this is RAID1, I would have expected the scrub to be comparing the two disks against each other – is that not so? If not, why did the two disks finish at different times? It’s also interesting to note that the roughly 1TB/hr pace I saw earlier stopped holding at some point.

Speed of btrfs scrub

Here’s the output of the status command:

[root@supermario ~]# btrfs scrub status /media/Photos/
scrub status for 27cc1330-c4e3-404f-98f6-f23becec76b5
 scrub started at Tue Mar 21 17:18:13 2017, running for 01:05:38
 total bytes scrubbed: 1.00TiB with 0 errors

So on Fedora 25 with an AMD-8323 (8 cores, no hyperthreading) and 24GB of RAM, with this hard drive and its 3TB brother in RAID1, it takes about an hour per terabyte to do a scrub. (That seems roughly in line with what a coworker told me his system takes to do a ZFS scrub – 40ish hours for about 40ish TB.)

Finally have btrfs setup in RAID1

A little under three years ago, I started exploring btrfs for its ability to help me limit data loss. Since then I’ve implemented a snapshot script to take advantage of btrfs’ Copy-on-Write features. But until now I hadn’t had the funds or the PC case space to do RAID1. I finally was able to implement it for my photography hard drive. This means that, together with regular scrubs, there should be only a minuscule chance of bit rot ruining any photos it hasn’t already corrupted.

Here’s some documentation of the commands I used to get the drives into RAID1.

Before RAID:

# btrfs fi df -h /media/Photos
Data, single: total=2.31TiB, used=2.31TiB
System, DUP: total=8.00MiB, used=272.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, DUP: total=3.50GiB, used=2.68GiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=512.00MiB, used=0.00B

# btrfs fi usage /media/Photos
Overall:
    Device size:                   2.73TiB
    Device allocated:              2.32TiB
    Device unallocated:          423.48GiB
    Device missing:                  0.00B
    Used:                          2.31TiB
    Free (estimated):            425.29GiB      (min: 213.55GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 5.64MiB)

Data,single: Size:2.31TiB, Used:2.31TiB
   /dev/sdd1       2.31TiB

Metadata,single: Size:8.00MiB, Used:0.00B
   /dev/sdd1       8.00MiB

Metadata,DUP: Size:3.50GiB, Used:2.68GiB
   /dev/sdd1       7.00GiB

System,single: Size:4.00MiB, Used:0.00B
   /dev/sdd1       4.00MiB

System,DUP: Size:8.00MiB, Used:272.00KiB
   /dev/sdd1      16.00MiB

Unallocated:
   /dev/sdd1     423.48GiB

[root@supermario ~]# btrfs device add /dev/sda1 /media/Photos/
/dev/sda1 appears to contain an existing filesystem (btrfs).
ERROR: use the -f option to force overwrite of /dev/sda1
[root@supermario ~]# btrfs device add /dev/sda1 /media/Photos/ -f

[root@supermario ~]# btrfs fi usage /media/Photos
Overall:
    Device size:                   6.37TiB
    Device allocated:              2.32TiB
    Device unallocated:            4.05TiB
    Device missing:                  0.00B
    Used:                          2.31TiB
    Free (estimated):              4.05TiB      (min: 2.03TiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,single: Size:2.31TiB, Used:2.31TiB
   /dev/sdd1       2.31TiB

Metadata,single: Size:8.00MiB, Used:0.00B
   /dev/sdd1       8.00MiB

Metadata,DUP: Size:3.50GiB, Used:2.68GiB
   /dev/sdd1       7.00GiB

System,single: Size:4.00MiB, Used:0.00B
   /dev/sdd1       4.00MiB

System,DUP: Size:8.00MiB, Used:272.00KiB
   /dev/sdd1      16.00MiB

Unallocated:
   /dev/sda1       3.64TiB
   /dev/sdd1     423.48GiB


[root@supermario ~]# btrfs balance start -dconvert=raid1 -mconvert=raid1 /media/Photos/

Done, had to relocate 2374 out of 2374 chunks

Post-RAID:

[root@supermario ~]# btrfs fi usage /media/Photos
Overall:
    Device size:                   6.37TiB
    Device allocated:              4.63TiB
    Device unallocated:            1.73TiB
    Device missing:                  0.00B
    Used:                          4.62TiB
    Free (estimated):            891.01GiB      (min: 891.01GiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID1: Size:2.31TiB, Used:2.31TiB
   /dev/sda1       2.31TiB
   /dev/sdd1       2.31TiB

Metadata,RAID1: Size:7.00GiB, Used:2.56GiB
   /dev/sda1       7.00GiB
   /dev/sdd1       7.00GiB

System,RAID1: Size:64.00MiB, Used:368.00KiB
   /dev/sda1      64.00MiB
   /dev/sdd1      64.00MiB

Unallocated:
   /dev/sda1       1.32TiB
   /dev/sdd1     422.46GiB

[root@supermario ~]# btrfs fi df -h /media/Photos
Data, RAID1: total=2.31TiB, used=2.31TiB
System, RAID1: total=64.00MiB, used=368.00KiB
Metadata, RAID1: total=7.00GiB, used=2.56GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

And here’s the status of my first scrub, to test out the commands:

[root@supermario ~]# btrfs scrub status /media/Photos/
scrub status for 27cc1330-c4e3-404f-98f6-f23becec76b5
 scrub started at Tue Mar 21 17:18:13 2017, running for 00:09:10
 total bytes scrubbed: 145.57GiB with 0 errors
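Since the whole point of this RAID1 setup is catching bit rot, the scrubs need to happen on a schedule rather than whenever I remember. A minimal sketch of how I might do that with cron (the timing and path are just my assumptions, not something I’ve settled on):

# /etc/cron.d/btrfs-scrub-photos - scrub the photo array monthly at 3am
0 3 1 * * root /usr/sbin/btrfs scrub start -Bd /media/Photos/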

LXC Project Part 3: Starting and logging into my first container

Continuing my LXC project, let’s list the installed containers:

lxc-ls

That just shows the name of the container – lemmy. I’m going to start it as a daemon in the background rather than being dropped straight into its console:

lxc-start -n lemmy -d

As per usual Linux SOP, it produced no output. Now to jump in:

lxc-console -n lemmy

That told me I was connected to tty1, but it never presented a login prompt. Quitting out via Ctrl-a q got me back to the VM’s tty, but trying again still didn’t get me a login. There’s some weird issue preventing it from working; however, this did work:

lxc-attach -n lemmy

I’m not 100% sure why attach works and console doesn’t, but there seems to be some discussion online about systemd causing issues. At any rate, the main limitation of lxc-attach is that the user running it also has to exist in the container. Given that these are server boxes, root is fine, so it works.
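For what it’s worth, lxc-attach can also run one-off commands without dropping me into a shell, and lxc-info shows the container’s state from the host. A quick sketch of what I expect to use for poking at lemmy:

lxc-attach -n lemmy -- ip addr   # run a single command inside the container
lxc-info -n lemmy                # state, PID, and (eventually) IP from the host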

Unfortunately, networking does not work. That’ll be for next time.

LXC Project Part 2: Setting up LXC

I’m continuing on from yesterday’s post to get the VM ready to host LXC. I’m starting with CentOS 7, so the first thing I had to do was enable the EPEL repos:

yum install epel-release

Then, according to the guide I was following, I also had to install these packages:

 yum install debootstrap perl libvirt

That installed a bunch of stuff. I get that they’re trying to break the steps out, but they probably could have installed both that and the LXC packages below in one go:

yum install lxc lxc-templates

Then start the services we just installed:

systemctl start lxc.service
systemctl start libvirtd

Then, to make sure everything’s configured correctly, it’s a good idea to run the following:

lxc-checkconfig

If everything shows “enabled” (in CentOS 7 it’s also green), then you’re in good shape. You can see which templates you have installed with the following command:

ls -alh /usr/share/lxc/templates/

When I did that, I had alpine, altlinux, busybox, centos, cirros, debian, fedora, gentoo, openmandriva, opensuse, oracle, ubuntu, and ubuntu-cloud.

As my last act of this post, I’ll create my first container:

lxc-create -n lemmy -t centos

This container is going to run Cockpit to keep an eye on the servers on my network. After running the create command, it looked like a yum or dnf install was happening. Then it did some more stuff and finally told me what the root password was. It also told me how to change it without having to start the container, so I did that (see the sketch below). Next time: starting and running a container.
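For reference, changing the root password without starting the container comes down to a chroot into the container’s root filesystem. A sketch, assuming the default LXC path for the rootfs:

chroot /var/lib/lxc/lemmy/rootfs passwd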

LXC Project Part 1: Bridging the Connection

As I mentioned before, I’m looking at Linux Containers (LXC) to have a higher density virtualization. To get ready for that, I had to create a network bridge to allow the containers to be accessible on the network.

First I installed bridge-utils:

yum install bridge-utils -y

After that, I had to create the network script:

vi /etc/sysconfig/network-scripts/ifcfg-virbr0

In there I placed:

DEVICE="virbr0"
BOOTPROTO="static"
IPADDR="192.168.1.35" #IP address of the VM
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1" 
DNS1="192.168.1.7"
ONBOOT="yes"
TYPE="Bridge"

Then, since the Ethernet device on this machine is eth0, I edited:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

and added

BRIDGE="virbr0"

and after a

systemctl restart network

it seemed to be working – I was able to ping www.google.com. We’ll see what happens when I start installing LXC containers.
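For completeness, here’s roughly what my ifcfg-eth0 ends up looking like once it’s enslaved to the bridge. This is a sketch based on my setup – the HWADDR/UUID lines you’d normally see are left out:

DEVICE="eth0"
TYPE="Ethernet"
ONBOOT="yes"
BOOTPROTO="none"
BRIDGE="virbr0"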

Using Flatpak to install LibreOffice on Fedora 24

After someone told me that a PDF I’d created in Calligra Office was illegible, and after having issues with spreadsheets loading slowly, I decided to install LibreOffice. However, rather than go with the version in the repos, I decided to go with Flatpak, which allows for a newer version via the use of runtimes. First, I had to install Flatpak:

[code language="bash"]
sudo dnf install flatpak
[/code]

Then I needed to install the runtimes. The LibreOffice page uses the --user flag, but I think that is just for installing it only for yourself rather than system-wide, so I am omitting it.

[code language="bash"]
wget https://sdk.gnome.org/keys/gnome-sdk.gpg
flatpak remote-add --gpg-import=gnome-sdk.gpg gnome https://sdk.gnome.org/repo/
flatpak install gnome org.gnome.Platform 3.20
[/code]

That took a bit and said things like “Writing Objects” on the terminal. Eventually that was done. Then it was time for LibreOffice. I grabbed the file from the website, then:

[code language="bash"]
flatpak install --bundle LibreOffice.flatpak
[/code]

After doing that, I hit Alt-F2 to see if it would launch like a regularly installed application. It did not show up. Perhaps Flatpak only integrates well with GNOME for now?

[code language="bash"]
flatpak run org.libreoffice.LibreOffice
[/code]

Worked, though.
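You can also double-check what got installed and which runtimes are present. I believe these are the right incantations, but I’m still new to Flatpak:

[code language="bash"]
flatpak list
flatpak list --runtime
[/code]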

In the future if I want to update it, I need to run:

[code language="bash"]
flatpak update --user org.libreoffice.LibreOffice
[/code]

I do have to say that I’m disappointed it doesn’t appear in my Alt-F2 launcher.

SuperMario is at Fedora 24

My main computer is now on Fedora 24. This time around I only had to uninstall HDR Merge (which was from my COPR and I hadn’t built a Fedora 24 version yet) and OBS-Studio because there isn’t a Fedora 24 package for it yet. Not bad.

After rebooting, I didn’t have graphics. Rebooting once more kicked the akmod into gear, and now things appear to be working well. Two more computers left to upgrade to Fedora 24 – the VM server and the Kodi living room box.

Fedora 24 is out!

Fedora 24 was released yesterday. I updated Daisy, my big laptop, first since it’s not critical – if the update broke something, I wouldn’t care. The only hitch was that I had to reinstall the RPMFusion repos from the RPMs for Fedora 24; otherwise the updater complained that one of the packages wasn’t signed and refused to do the upgrade. That probably has something to do with the fact that, for the last release or two, RPMFusion wasn’t exactly in the best of shape. I’m currently updating my netbook (Kuribo), but that’s more of an all-evening affair since it’s just running on an Intel Atom. There are three more Fedora machines in the house – SuperMario, TanukiMario, and BlueYoshi. I’ll probably save the living room Kodi (BY) for last since everyone in the house uses that to watch TV.
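For reference, reinstalling the RPMFusion repos is basically one command. This is a sketch using the rpm -E trick so it grabs the right release; double-check the URLs against the RPMFusion configuration page, since they change from time to time:

sudo dnf install \
  http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
  http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm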

So far I haven’t noticed too many changes on KDE, but the icon indicating that I have updates is different. That seems to change with every release (or at least every other one).

A look at the many flavors of Korora

It’s been a long time since I looked at a new Linux distro. Long-time readers know I used to review Linux distros a few years ago. But one of the maintainers of Korora posts to the Fedora Planet feed. (People may constantly say that RSS feeds are dead, but some of us still use them!) Korora (which used to be based on Gentoo) aims to create the ultimate desktop user’s Fedora setup, so they tweak Fedora a bit and add some repos, like RPMFusion, out of the box. Since I do a lot of this every time I install Fedora anyway, I may just install Korora next time I do an install. So I wanted to look at Korora and also use it as a chance to see what’s going on nowadays in the non-KDE/Plasma desktops. Also, since Korora 23 should be coming out within the next month or so, it should help me see how Korora upgrades compare to Fedora upgrades (they *should* be the same). First, the install:

The only bad thing I could find with the install is not in that video: certain desktops (XFCE was one; I think another was MATE?) have their “Done” buttons obscured by the theming. Luckily, I knew to click in the top left corner anyway. Hopefully that gets fixed in a future release.

For some reason, my XFCE video won’t embed. You can watch it here.

Upgrading to Korora 23 didn’t really change much – the desktop environments increased in version numbering, but I didn’t see anything drastically different.

I created my first RPM! And have a copr repo!

It’s the intersection of three of my hobbies – computers, Linux, and photography! Ever since I learned how to compile source code from the net about a decade ago, I’ve wanted to create RPMs to help those who aren’t comfortable with compiling or simply don’t want to bother. But, for some reason, RPM creation was always something I struggled to get right. Nearly once a year I’d try to do it, and always failed. Recently, though, when I was reading the instructions on how to do it, it finally clicked.

And there was software I wanted to use that didn’t have an RPM already. So I grabbed the source code for hdrmerge (which allows one to create high dynamic range photos – I plan on blogging about that some time in the near future) and worked to create an RPM. Then, to make it easy for people to get updates as the software evolves, I created a Copr repository!

You can find the repo here.
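If you’re on Fedora and want to try it, enabling a Copr repo is a one-liner. This is a sketch – the user/project name below is a stand-in, so use whatever the Copr page actually shows:

sudo dnf install dnf-plugins-core
sudo dnf copr enable <copr-user>/hdrmerge   # stand-in name, check the repo page
sudo dnf install hdrmerge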

Also, it turned out to be more than RPM Creation 101 – it was a 200-level class, since I had to figure out how to create a patch to make it build on Fedora. The next step might be to get it into the official Fedora repositories, since there aren’t any licensing or patent reasons keeping it out.
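For anyone climbing the same learning curve: the patch part is mostly spec-file plumbing. A minimal, illustrative sketch (the patch file name here is made up, and a real spec has a lot more to it):

# excerpt from a .spec file - illustrative only
Patch0:   hdrmerge-fedora-build.patch

%prep
%setup -q
%patch0 -p1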