October 2020 Summary
Disaster struck in October, with a failure of the hard drive I use for project data storage on my workstation!
Fortunately, I did have some backups, and I was able to recover most of the data. But it was certainly a shock, and it took me quite a while to recover. It also made me think that I need to change how I store the data, to provide more redundancy and quicker recovery from a failure like this one.
I also had problems due to upgrading to Ubuntu Studio 20.04, which did not go very smoothly. Some applications, notably VLC, stopped working properly.
On a suggestion from Andrew Pam, I created a new BTRFS RAID-1 project drive, implemented with two 1-terabyte drives, and recovered the project data to it, but I’m only just learning how this works.
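For anyone curious, creating a mirrored pair like this turns out to be quite simple. Here’s a sketch, with placeholder device names (not my actual setup — check yours with lsblk before trying anything like this):

```
# Create a BTRFS filesystem that mirrors both metadata (-m) and data (-d)
# across two drives -- device names here are placeholders
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc

# Mounting either member device brings up the whole array
mount /dev/sdb /project

# Confirm that both data and metadata show as RAID1
btrfs filesystem df /project
```

With both profiles set to raid1, every block lives on both drives, so the filesystem should survive the complete failure of either one.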
The Next Adventure Begins…
After much hemming and hawing, I have finally installed the new Blender 2.83 “Long-Term Support” (LTS) package on my workstation.
Due to all the legacy files, and particularly the elimination of the “Blender Internal” rendering engine, this has to be a parallel installation. Also I’m about to upgrade to Ubuntu Studio 20.04, and I’m not sure which Blender version is standard in it (it’s something after 2.8, I’m pretty sure).
And then of course, I have a few legacy files which broke with 2.79b, which is what I’m using to complete “Lunatics!” episode 1 rendering. So I also have to keep my local installation of Blender 2.71, which was the last version that worked with all of those older files (or at least, the last one I verified they work with).
Needless to say, having three different versions of a package installed at once can be confusing, so I also designed three custom launcher icons to differentiate them.
Since 2.79b is still going to be my main tool until this episode is finished, I have used the original Blender color scheme for that one. For the other two, I used color schemes that my synesthesia-addled brain associates with the version numbers, the idea being to make it easy for me to unconsciously associate each icon with the correct Blender.
Also, to eliminate the dependency on Ubuntu Studio’s current choices, I have installed all three versions of Blender from the original Blender.org binary packages, so upgrading won’t change which I have available (well, technically, they should give me an extra one, so there will actually be four different versions on my system).
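Since the blender.org binary tarballs are self-contained, each version can simply be unpacked into its own directory, and each then gets its own launcher file pointing at its own executable and custom icon. A sketch of one such launcher (the paths here are illustrative, not my actual setup):

```
# ~/.local/share/applications/blender-283.desktop -- one file per version,
# each with its own Exec path and Icon (example paths)
[Desktop Entry]
Type=Application
Name=Blender 2.83 LTS
Exec=/home/user/apps/blender-2.83.0-linux64/blender
Icon=/home/user/apps/icons/blender-283.png
Categories=Graphics;3DGraphics;
```

Because nothing is installed system-wide, the distribution’s own Blender package never collides with these.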
I will have to spend some time getting up to speed with 2.83. There are a lot of UI differences, as you probably already know, and a few structural differences (like “Collections” — which apparently replace “Layers” and/or “Groups” and have some new capabilities I don’t fully understand).
My plan going forward is to:
- Eliminate the 2.71 dependencies for all new work (maybe already done?).
- Continue with 2.79b for modeling and rendering in Ep. 1, and possibly for Ep. 2-3.
- Start out using 2.83LTS for post-compositing from Ep. 1.
- Experiment with replacing Blender Internal rendering with Eevee on new models.
- For Ep. 4 and beyond (maybe earlier), convert existing models to Eevee.
Since only the “Press Conference” seems to require 2.71, and it’s already rendered, I may have already eliminated that dependency. I’m just keeping the older version for rework (there is some to do on that sequence).
Transitioning to 2.83 for compositing loses me nothing, because aside from a few experiments, I haven’t set up any post-compositing files. I’ll obviously have to learn how to do node-based compositing in 2.83, but I won’t be repeating any work.
It’s also possible that compositing with the latest Blender will make it easier to combine render output from all three versions (though possibly this wouldn’t really matter — I assume they can all read Blender’s “Multilayer EXR” format). I say “post-compositing” to distinguish it from the compositing included in my shot rendering files, which is just meant to be a first-pass approximation; those first-pass versions are what I’ve been showing so far. In post-production, though, I’ll be able to do some tweaking on the colors, fix some Freestyle glitches, and make other minor edits. It’ll be subtle, but with luck, the result will “pop” a bit more than the default rendered version.
The only models that are really material-complete are the settings on Earth and the Soyuz spacecraft. I might be able to start out with Eevee materials on the “Space Station Alpha” model, and so incorporate 2.83 into Ep. 2. But it’s really going to depend on how smooth the transition is.
The real challenge will be transitioning the character models to Eevee, without the change being noticeable. I will have to do some experiments to convince myself that can work. Probably, the easiest thing will be to keep rendering characters with 2.79 through the end of the “Pilot Arc” in Ep. 3, and then transition when working on Ep. 4.
After writing this update, I’m about to upgrade Ubuntu Studio to 20.04 LTS, which I’ve been putting off for months now. It’s well past time I did this. I’m particularly looking forward to seeing what new versions of Kdenlive, Audacity, Inkscape, and Ardour are included and how well they work for me.
I’ve attached my custom Blender launcher icons. The PNGs are 256 pixels, to scale well for the binary series of sizes (32, 64, 128, 256). But it’s easy to export any size you want from the SVGs.
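For reference, exporting any particular size from an SVG is a one-liner on the command line (this is Inkscape 0.92 syntax, and the filenames are just examples):

```
# Export a 64x64 PNG from the launcher icon SVG (Inkscape 0.92 syntax;
# Inkscape 1.0+ replaces --export-png with "-o blender-283-64.png")
inkscape -w 64 -h 64 --export-png=blender-283-64.png blender-283.svg
```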
History Project (Part 3)
So, it turns out I spent just about all of September examining my personal and project history, and as promised, I want to show an example and talk about how I put it together.
Naturally, these videos are really for me, and they border on “home movies” in terms of how much anyone else is likely to want to watch them. But they are useful to me, because they allow me to keep my own history straight, and pinned down to specific dates.
It’s basically like those “personal logs” they keep mentioning in Star Trek, right?
Unlike real production work for publication, this project has to be very limited in effort — if it’s too hard, I simply shouldn’t waste the time on it. So, I’ve constructed the workflow around things that are easy to do with my software (mostly meaning Kdenlive and various GNU/Linux utilities).
Review of Source Material
In Part 1 of this series, I talked about the collection and conversion process I used to find files on my own computer’s filesystem by date, and then convert them into a useful “slide” or “video clip” to be used in my summary.
I finally automated this process with a Linux/Python/Bash script, which is attached to this post. It’s pretty customized to my own needs, and would need to be adapted to be used by anyone else. But it solved my problem and allowed me to just set it to collect and sort data for a given year. I ran it by importing it into the Python interpreter and calling the functions from there — it’s not a fully independent program. That gave me the flexibility to cope with problems and special needs for different years.
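The core of the collection step is simple enough to sketch. This is not the attached script itself, just a minimal shell version of the idea: copy every file under a source tree into year/month subdirectories based on its modification time:

```shell
#!/bin/sh
# Minimal sketch of the collect-and-sort idea (not the attached script):
# copy each file under $1 into $2/YYYY/MM/ based on its modification time.
sort_by_date() {
    src="$1"; dest="$2"
    find "$src" -type f | while IFS= read -r f; do
        ym=$(date -r "$f" +%Y/%m)   # GNU date: format the file's mtime
        mkdir -p "$dest/$ym"
        cp -p "$f" "$dest/$ym/"
    done
}
```

The attached Python version adds the selection logic and per-year special cases; this just shows the date-bucketing.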
It’s desirable to have the images grouped into clusters to make the summary more comprehensible: reference images should go together, as should receipts from purchases, and so on, and there are always some irrelevant files that get picked up by my automated script.
So it’s impossible to avoid some manual selection and sorting, which I did using Gwenview (my preferred image browser). It’s a bit like playing solitaire, which I sometimes do to relax while waiting on long processes; since this is more productive, I can substitute it.
At the end of this process, I have a directory tree with paths like “2017/01/Receipts” or “2017/01/VID/RenderCluster”, which collects the files I want to include.
In Part 2, I described how I was able to get a useful video of production events by simply browsing the material by time and date, and recording that as a screencast. This got me the articles and posts I published during the interval, and also project email communications, which often filled in gaps that weren’t worth publishing.
Since I didn’t keep detailed or systematic text logs then, I browsed my Facebook feed the same way; although it’s cluttered with irrelevant material, it also includes references to things I would otherwise have put in my log.
Assembling a Summary Video
The video above is a good example of what I then did with this information. It has the following structure:
- Title Card (gives the date and also attribution for the music I set it to).
- Highlights: a text summary of what happened that month (note that I wrote this LAST, after viewing the rest of the video; I also transferred it to a text log).
- Production Log, Patreon Posts, WordPress Posts, Facebook Project Page Posts, Facebook Personal Posts — these are video screencasts of browsing the respective sources, for the respective dates, with extraneous stuff cut out if it’s too much of a distraction.
- Website Statistics (for the period in question, I was using Webalizer to generate these for the ‘lunatics.tv’ site).
- Business documents: receipts, tax forms I had to fill out (for security reasons, I do NOT include the filled forms, just the forms I had to fill, so I remember I was doing tax preparation then).
- Planning documents.
- Papers I read or at least consulted during that month (research).
- Reference images I consulted.
- Project plans and diagrams.
- Intermediate art, test renders, and so on used in creating production assets.
- Live-action video, Animatics, and Screencasts of production work. Accelerated to 60X, typically — useful because then I can just read minutes and seconds as hours and minutes worked (this worked for much of this material, because I didn’t record most of the worktime then — I’ve also used 1800X for complete worklogs).
- Production output videos, such as animatics, behind-the-scenes videos, and rendered video tests.
- Finally, I end on the log of SVN commits for the month.
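Incidentally, the 60X acceleration mentioned above can also be baked in ahead of time with ffmpeg, rather than done in the editor. A sketch, with example filenames:

```
# Rescale timestamps to 60X speed and drop the audio track; at 60X,
# minutes:seconds of playback read as hours:minutes of work time
ffmpeg -i worklog.mkv -vf "setpts=PTS/60" -an worklog-60x.mkv
```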
I do this in much the same way I do daily video logs: I keep a template with title cards ready and just update it for each period. Here’s what this January 2017 summary looks like in Kdenlive:
At this point, I have the process sufficiently buttoned down to relegate it to a maintenance task that I can do in the “off” time between “real” projects, which is my goal.
I do find that it has already been useful in quantifying when certain landmark events happened on my project, and getting a more realistic idea of where “all that time” went.
It has also been an interesting technical challenge, and I’ve learned a number of things about my software in the process.
So I went ahead with the upgrade, and a lot of stuff broke. Not that that’s entirely unexpected, but it is frustrating.
I’ve also run into some problems that were probably unrelated, like the SVN corruption in the screen capture above (one of the “pristine” copy files is unreadable — I think (and hope) this is a software problem, possibly from a bad shutdown, and not a sign that my hard drive is crashing).
I think the easiest way to solve this problem is to get a fresh checkout of the source tree and copy the new file (the one corresponding to the corrupted database) into the new copy.
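In concrete terms, that plan looks roughly like this (the URL and paths are placeholders, not my actual repository):

```
# Get a fresh working copy with an intact pristine database
svn checkout https://example.org/svn/lunatics/trunk fresh-wc

# Carry over the newer version of the affected file from the old working
# copy, then check what SVN thinks has changed before committing
cp old-wc/path/to/file fresh-wc/path/to/file
svn status fresh-wc
```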
But there were also compatibility issues. The new DEB package of VLC barely runs on my system: it’s extremely buggy, prone to freezing, and doesn’t support my multi-monitor setup.
I tried installing the SNAP package of VLC, but it’s sandboxed in a way that I frankly don’t see the point of. After all, Linux already has a file-permissions system; why do I also need to hide most of my system from the application?
I was a bit disappointed to find that Inkscape is still on 0.92, even though I know that 1.0 has been released. And although Kdenlive is definitely upgraded, I have mixed feelings about it — parts of it are improved, other parts are slower. For example, “Speed” is no longer an “Effect”, but a specialized parameter for each clip. This removes the 2000% limit that the old version had and may solve other problems. But it also means that “Paste Effects” won’t include it, which makes setting a speedup on many clips a very tedious one-by-one operation, whereas it was quicker before.
The upgrade didn’t even install Blender. I don’t know why, but it doesn’t really matter, because I’ve already installed it manually (perhaps the upgrade manager detected this?).
I think the most hilarious bug was that the whole installation wouldn’t proceed if you happened to have a user group named “render”!
For some reason, the “udev” package needs to create a group called “render”, and it needs it to be a “system” group.
Naturally, I know this, because I did have a group named “render” — it controlled the file-permissions for access to my Blender rendering area.
I solved it by changing the group name to “blender”, which I wasn’t using for anything else. But it does seem like a strange problem.
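For anyone who hits the same thing, the rename itself is quick, and existing files keep working because they reference the numeric GID, which doesn’t change (the group names here match my case; confirm yours with getent first):

```
# Rename the conflicting group; the numeric GID stays the same, so file
# ownership in the rendering area follows the group to its new name
groupmod -n blender render
getent group blender
```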
On the other hand, Aegisub — my preferred subtitle editor — is back in the Ubuntu Studio repository. So that saves me some trouble. And I also found DVD Styler (I had previously installed it from an unofficial package, but it’s in the repo now).
Also, I apparently completely deleted my Eclipse installation, so that’s a setback for software development. I’ll have to set it up again from scratch.
I was going to review my worklog with VLC, but as mentioned above, I couldn’t get either version of it to play the files.
So I have a lot of fixing to do…
Okay, Now I’m in Data Recovery Hell…
Okay, so that error message in my last post wasn’t due to the upgrade. It was actually due to my hard drive failing. This is pretty disastrous, as it was the /project partition, which is the main spot I store all of my project work.
Also, the error occurred, as you could see in the screen capture, during my attempt to synchronize my working copy with my SVN server.
So far, indications are that it is an actual hardware failure of the drive, though it may be localized to a few sectors on the same partition.
So that’s pretty frustrating.
There ARE backups, although not a single, complete, and recent one. I have a complete backup of the /project partition from January 2019, and a backup of the most important directories (including Lunatics and my development folder) from March 2020. I also managed to commit most of my work over the summer, and the one file that triggered the error has been copied to another hard drive, so I think it’s intact.
I have disconnected power and data from the drive and I’m setting up to try to clone the failing partition with “ddrescue”. I’m also planning to purchase a replacement drive. I was working on a plan to relocate project data onto the MergerFS / Volume Storage system I was creating, and this might be the impetus to hurry that up.
In any case, I’ll probably be spending next week trying to fully recover from this. I could have done better. But I also could’ve done a lot worse, so I’m just glad I made the backups that I did!
I can’t believe how long this takes. All I’m doing is moving data around.
But I can report some successes.
Running ‘ddrescue’ to salvage my data was pretty successful, although it literally took a couple of days to run the program, and I had to wait for a 4 TB hard drive I ordered to arrive, so I’d have a place to put all that data. Less than 1% of the data was corrupted or lost, so that’s not too awful.
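For the record, a ddrescue run of this kind looks something like the following (device names are placeholders; the map file is what makes it possible to interrupt and resume a multi-day run):

```
# First pass: copy everything readable, recording progress in a map file
ddrescue -f /dev/sdb1 /dev/sdc1 rescue.map

# Follow-up pass: retry the remaining bad areas up to three more times
ddrescue -f -r3 /dev/sdb1 /dev/sdc1 rescue.map
```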
The worst part, though, is that it doesn’t seem to be very easy to figure out which 1% was lost! I think the only way is to compare the current state with backed-up data. This is one of the things I’m currently working on, now that I’ve installed the new drive and rescued as much as I could from the failing drive, which is now removed (it’s the 2 TB drive in the foreground of the picture above).
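The comparison can at least be partly mechanized. This is a sketch of the idea rather than my exact procedure: a recursive content diff flags every file that differs between the backup tree and the rescued tree, which narrows down where to look. (Files I changed legitimately since the backup will show up too, so the output still needs manual review.)

```shell
#!/bin/sh
# List files whose contents differ (or which exist on only one side)
# between a backup tree ($1) and a rescued tree ($2) -- paths are
# whatever trees you point it at.
find_damage() {
    diff -rq "$1" "$2" | grep -E '^(Files|Only in)'
}
```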
I’ve also started moving some of the data around on my system.
Over the years, various hard drive space crunches had caused me to compromise my original policies on the purposes for the various drives. So I had moved a lot of data from my old “Project” drive to my “Work” drive (which is supposed to be a sort of scratch drive, like /tmp, but not erased on each reboot) as “temporary” storage (but it’s been there for years).
Another drive, “/masters”, is specifically for collecting the original, uncompressed video for projects, before converting them to distribution formats. But it has also ended up holding masters for projects I really should be storing offline, instead of letting them take up space on my live system.
As I’m now relocating /project to a larger partition, I can reconsolidate some of that data and make other reforms.
I’m also trying to figure out just the right sequence to make the most of my recovered data (which may be slightly corrupted) and the backups (which are not corrupted, but are out of date). And unfortunately, just copying the data tree takes about an hour, so this is a slow process, especially when I make mistakes, and I have made a few.
Following up on advice from a patron (thanks!), I’m looking into replacing the partition with a redundant pair of drives using the “BTRFS” filesystem. I also plan to set up rsnapshot backups to another drive, plus a daily mirror of those to the file server on another computer. This will actually give me quadruple redundancy in the future: two completely up-to-date drives, plus daily backups (which also allow for retrieving recently-deleted files).
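The rsnapshot side of that plan is mostly a matter of configuration. A sketch of the relevant rsnapshot.conf lines (the paths and retention counts are examples, not my final setup; note that rsnapshot requires literal tabs, not spaces, between fields):

```
# /etc/rsnapshot.conf excerpt -- fields MUST be tab-separated
snapshot_root	/backup/snapshots/
retain	daily	7
retain	weekly	4
backup	/project/	localhost/
```

Run from cron, this keeps a week of daily and a month of weekly snapshots, with unchanged files hard-linked between snapshots so the space cost stays modest.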
So this amounts to a lot of housekeeping and not much production, I’m afraid. But it’s got to be done.