A Quick Update on my use of btrfs and snapshots

Because of grad school, my work on Snap in Time has been quite halting – my last commit was 8 months ago. So I haven’t finished the quarterly and yearly culling part of my script. I’ve been making semi-hourly snapshots since March 2014, so I had accumulated something like 1052 snapshots. While performance did improve a bit after I turned on the autodefrag option, it’s still a bit suboptimal, especially when dealing with database-heavy programs like Firefox, Chrome, and Amarok. At least that is my experience – it’s entirely possible that this is correlation and not causation, but I have read online that when btrfs has to track lots of snapshots, figuring out what to keep, what to delete, and so on can be a performance drag. I’m not sure, but I feel like 1052 is a lot of snapshots. It’s certainly way more than I would have if my program were complete and working correctly.

So I went in and manually deleted March through June of last year. That dropped the number of snapshots to about 619. I’m pausing there because these deletions are “no-commit” deletions, meaning btrfs does not immediately delete the snapshots (as that would be computationally expensive). Instead it marks them for deletion at its convenience. I’d prefer not to break things, so I stopped there for now.
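For anyone curious what that kind of manual culling looks like, here is a rough sketch. The snapshot directory and the YYYY-MM-DD naming scheme are assumptions for illustration, not necessarily what Snap in Time actually uses:

```shell
#!/bin/sh
# Sketch of manually culling a date range of snapshots.
SNAPDIR=/home/.snapshots    # assumed snapshot location

# Succeed if a snapshot name falls in March through June 2014.
in_range() {
    case $1 in
        2014-0[3-6]-*) return 0 ;;
        *)             return 1 ;;
    esac
}

for snap in "$SNAPDIR"/*; do
    if in_range "$(basename "$snap")"; then
        # Dry run: drop the echo to actually delete. By default the
        # deletion is "no-commit" -- btrfs just marks the subvolume
        # for cleanup; pass -c to wait for the transaction to commit.
        echo btrfs subvolume delete "$snap"
    fi
done
```

The `-c` (`--commit-after`) flag on `btrfs subvolume delete` is what you’d use if you wanted the deletion committed before the command returns, rather than left to btrfs’s convenience.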

Interestingly, the amount of space shown by btrfs fi show /home did not change. It continued to show 1.91TiB used by the FS and 1.99TiB used in general (perhaps taking metadata into account?). I’m not sure if that’s because of the no-commit thing and it’ll be drastically different tomorrow, or if it’s because my file system usage is fairly static over the long term – in other words, there aren’t some large files I’ve deleted that are hanging around because of snapshots. Time will tell, I guess.

Also, I wonder if recent kernel changes have fixed the issue where df -h wouldn’t properly account for space taken up by btrfs, because I do see my Available space increasing from 766G to 768G. Then again, maybe df -h wasn’t necessarily fixed for snapshots – maybe the fact that I store both the snapshots and home under /home is somehow making it work out. Alternatively, or perhaps in addition, df -h may simply respond more quickly to subvolumes being deleted.
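For comparing the different space reports before and after the background cleanup finishes, this is roughly how I’d check. It assumes /home is the btrfs mount point and a btrfs-progs recent enough to have `btrfs subvolume sync`:

```shell
# Compare btrfs's own space accounting against df after a cleanup.
if command -v btrfs >/dev/null 2>&1; then
    # Wait for btrfs to finish removing subvolumes marked for deletion.
    btrfs subvolume sync /home

    # Per-device view: total bytes used, as quoted above.
    btrfs filesystem show /home

    # Breakdown of data vs. metadata vs. system allocations.
    btrfs filesystem df /home
fi

# The traditional view -- historically unreliable on btrfs, but worth
# comparing before and after the cleanup.
df -h /home
```

`btrfs subvolume sync` blocks until the marked-for-deletion subvolumes are actually gone, so running `df -h` after it should show any space the no-commit deletions were going to free.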

Author: Eric Mesa

To find out a little more about me, see About Me