October 2018 Summary
- Animated Georgiana signature shot from TB-TouringBaikonur.
- Animation tests for Soyuz interior from LA-Launch.
- Mech animation tests for LA-Launch and SF-SoyuzFlight (the liftoff).
- Japanese transcription from PC-PressConference, and lip-sync tests.
- Started narration script for “No Children in Space” audiodrama.
This week, I believe I have successfully added a Japanese-language breakdown to Papagayo-NG, which is our lipsync editor. I’m also currently working on documentation and packaging for the program. The Japanese input allows for either kana (as shown here) or romanization, which is easier to type on an English keyboard. The code is fairly simple, since the Japanese syllabaries are a very phonetic writing system. We don’t support (or want to support) kanji for this application, since it’s all about the sound and speech movements.
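To give a flavor of why this was easy (this is just an illustrative toy, not the actual Papagayo-NG code, and the table below is a hypothetical subset): because each kana corresponds almost one-to-one to a sound, a simple lookup table gets you most of the way from text to phonemes.

```python
# Toy sketch: kana-to-phoneme breakdown via a lookup table.
# The mapping shown is a tiny hypothetical subset of a real table.
KANA_TO_PHONEMES = {
    "か": ["K", "AA"],   # ka
    "な": ["N", "AA"],   # na
    "り": ["R", "IY"],   # ri
    "こ": ["K", "OW"],   # ko
    "ん": ["N"],         # syllabic n
}

def breakdown(word):
    """Convert a kana string into a flat list of phoneme codes."""
    phonemes = []
    for ch in word:
        phonemes.extend(KANA_TO_PHONEMES.get(ch, []))
    return phonemes

print(breakdown("かんこ"))  # ['K', 'AA', 'N', 'K', 'OW']
```

Contrast this with English, where no per-character table could ever work — you need dictionaries and heuristics.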
There are probably refinements that could be made, but I think it’s good enough to script the small amount of Japanese dialog in our pilot episode (which is why I’m doing this now: it was this or use the “pidgin English” breakdown, and I thought I’d see how hard it would be to just do it right. Turns out it wasn’t that hard).
I don’t have to do anything for the Russian dialog, which was implemented a long time ago!
This is my first open source code contribution in a long time. Feels good to be doing it!
Papagayo allows you to type text in natural language (more or less), and does the breakdown to phonetics automatically. I am a little amazed that this works at all in English, which has very inconsistent spelling rules.
But it was pretty easy in Japanese, as long as you don’t try to use kanji (there are some edge cases, some of which aren’t handled yet, but it works pretty well already).
These “phonemes” (using representation codes developed at Carnegie Mellon University, originally for speech synthesis) are then mapped to an even smaller set of values defined by one of the available lipsync animation schemes (Papagayo currently supports two: “Preston-Blair” and “Fleming-Dobbs”). This works because many sounds look very much the same in terms of mouth position — one of the states in the Preston-Blair set is literally just called “etc”, because it represents so many different phonemes.
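The many-to-one collapse can be sketched like this (illustrative only — the shape names match the Preston-Blair set, but the exact table Papagayo uses may differ):

```python
# Hypothetical many-to-one mapping from CMU-style phonemes to
# Preston-Blair mouth shapes. Many distinct sounds share a shape.
CMU_TO_PRESTON_BLAIR = {
    "M": "MBP", "B": "MBP", "P": "MBP",    # lips pressed closed
    "F": "FV",  "V": "FV",                 # lower lip to teeth
    "AA": "AI", "AY": "AI",
    "OW": "O",  "UW": "U",
    "EH": "E",  "IY": "E",
    "L": "L",
}

def mouth_shape(phoneme):
    # Everything not listed collapses into the catch-all "etc" shape.
    return CMU_TO_PRESTON_BLAIR.get(phoneme, "etc")

print([mouth_shape(p) for p in ["HH", "EH", "L", "OW"]])
```

The `.get(..., "etc")` default is doing exactly what that “etc” state does in the real scheme: absorbing all the phonemes whose mouth positions look alike.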
After you import the audio and do the breakdown on the text, you can then use the graphical interface to adjust the timing to match. Papagayo doesn’t use speech recognition for this, although there is a proposal to add this feature. It just starts by spacing everything out evenly. But this is still a huge time-saver over having to do this directly in your animation tool.
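The “space everything out evenly” starting point is simple enough to sketch (my own simplified version, assuming each word gets an equal share of the phrase’s frame range):

```python
# Sketch of even spacing: divide a phrase's frame range equally
# among its words to produce an initial timing guess.
def spread_words(words, start_frame, end_frame):
    """Assign each word an evenly sized (start, end) frame range."""
    n = len(words)
    span = (end_frame - start_frame) / n
    return [
        (word,
         round(start_frame + i * span),
         round(start_frame + (i + 1) * span))
        for i, word in enumerate(words)
    ]

print(spread_words(["good", "luck", "georgiana"], 0, 30))
# [('good', 0, 10), ('luck', 10, 20), ('georgiana', 20, 30)]
```

From that rough guess, every word and phoneme then gets dragged into place by hand in the GUI.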
You grab the green line to adjust the position of the whole phrase, the orange lines to control the positioning of words, or the pink boxes to control the frame at which the individual mouth positions are triggered. And you can play back the sound with an animated preview face or mouth to test how the lipsync looks.
Once you’re happy, you export the data to an appropriate format, which you then import into your animation software.
Just a little milestone: Last night I checked in the last of the Papagayo “.PGO” and “.DAT” switch files for on-screen dialog in Episode 1. These are the files used to time the lipsync in the episode.
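For the curious, the Moho-style “.DAT” switch files are (as far as I can tell — treat the header name and layout here as my assumption about the format) just a header line followed by “frame phoneme” pairs, so reading one back is trivial:

```python
# Hedged sketch of a Moho-style switch-file parser. The "MohoSwitch1"
# header and the one-keyframe-per-line layout are assumptions about
# the format, not taken from the Papagayo source.
def parse_switch_dat(text):
    """Parse switch-file text into a header and (frame, phoneme) tuples."""
    lines = text.strip().splitlines()
    header, body = lines[0], lines[1:]
    keys = []
    for line in body:
        frame, phoneme = line.split()
        keys.append((int(frame), phoneme))
    return header, keys

sample = """MohoSwitch1
1 rest
12 MBP
18 AI
25 rest"""

header, keys = parse_switch_dat(sample)
print(header, keys)
```

Each pair is a keyframe: at that frame, the mouth switches to that phoneme’s shape and holds it until the next entry.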
A lot of the lipsync for the first half of the episode was already done some time ago. This week, though, I finished the Japanese dialog from the press conference, the dialog from the “Suiting Up” sequence, and the scattered bits of dialog in the “Launch” sequence (although quite a bit of the dialog there is off-screen, such as voices heard over the radio).
The Director Animates
Although I’m not the primary character animator on this project (that would be Keneisha Perry), I do like to get some practice, and I do a lot of the mechanical animation, which this shot resembles.
To get this animation, I actually animated the pen first, and then attached Georgiana’s hand IK controller to it using “Copy Location” and “Copy Rotation” constraints in Blender. I then animated the rest of her body to follow along with the motion. Then the moving camera was added — the camera follows a more-or-less continuous circular “orbit” throughout this montage of activities in Baikonur prior to the launch.
Bear in mind, part of what made this relatively easy was the excellent character rig created by Keneisha Perry, and the Rigify extension she started from.
This is still a GL previz, and the set and extras have not yet been added (the blue blocks are locations for extras, two of whom also provide the transitions from the previous shot and to the next one). I’ll be dressing and linking the set soon to complete this shot.
I’m trying out a video post using a private link from Vimeo — this is how I plan to put up early streaming for the episode release, so I figured I’d better test it out.
Animation and Signature Rig: Terry Hancock
Character Rig: Keneisha Perry
Character Model: Bela Szabo and Keneisha Perry
Character Design: Daniel Fu
Music: “Orient” by Lulo (Raúl Martín) from the album “Collage” (2009), CC BY-SA 3.0.
I think this angle might be the best of the ones I’ve tried. Hiromi is supposed to be tightening Georgiana’s seat belts prior to launch in this shot, before getting into her own seat. There are still a few interference problems here: the helmet is sticking through the headrest a little, and Georgiana’s comms-cap is not following her head, which is a rigging problem. I also think we’re going to have to remove the backpack from Georgiana’s suit, since she wouldn’t be using it on the Soyuz flight.
But I love the visor reflections.
I’m also kind of intrigued by the posture here — Hiromi is actually kneeling on the center seat for this shot, although it almost looks as if she’s standing beside Georgiana.