This page is some “behind the scenes” info on how I did it and what I’d do differently next time …
Recorded on my Olympus E-M5 mkII, which I’ve not really used for video before, but it can record 1080p at 50fps so I figured that would be good.
Lens is an old manual Fuji EBC 28mm f/3.5 on an adaptor (so, equivalent to 56mm on m4/3). I picked this one just because it was the right focal length for the shot I wanted and brighter than the zoom at the same length.
I ended up filming at exposure compensation -1.7 to stop the metering getting confused by the dark background.
I shot this in lots of little takes because I wasn’t quite sure how it was all going to fit together and I thought that it’d be easier to fix it up later.
Originally I was planning on chroma-keying (green screening) out my background and putting the slides under my video. Instead I found there was lots of noise in the background, so I just chroma-keyed out as much as I could and then put the slides over my video, using “Lighten” to combine them. It mostly worked. There are a couple of moments where you can see my shoulder through the image.
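For the record, “Lighten” just keeps the per-channel maximum of the two layers, which is why bright slide pixels win over a near-black background but a pale shoulder can still poke through black slide areas. A minimal per-pixel sketch:

```python
def lighten(slide_px, video_px):
    """'Lighten' blend: per-channel maximum of the two layers.

    Bright slide content beats a near-black background, but any video
    pixel brighter than the slide (e.g. a shoulder) shows through.
    """
    return tuple(max(s, v) for s, v in zip(slide_px, video_px))

# Green slide text over a dark background pixel: the text wins.
assert lighten((0, 255, 0), (10, 12, 8)) == (10, 255, 8)
# A light-grey shoulder under a black slide area: the shoulder shows through.
assert lighten((0, 0, 0), (120, 120, 120)) == (120, 120, 120)
```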
Microphone setup was an Olympus lapel mic on a long cable, plugged into the camera, but because I kept getting caught up in the cable I ended up dangling it from the ceiling just out of shot.
This kind of worked, but it’s a bit “phasey” … this room is very reflective. I taped some foam to the ceiling to try and reduce the phaseyness, but it didn’t help much. I didn’t have a proper adaptor for the mic either, so it’s using a cute cassette-tape-shaped headphone splitter.
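That “phasey” sound is comb filtering: the direct sound plus a slightly delayed reflection cancels at regularly spaced frequencies. A rough sketch of the effect, assuming a single reflection (the delay and reflection level here are made-up numbers, not measurements of my room):

```python
import math

def comb_gain(freq_hz, delay_s, reflection_level=1.0):
    """Magnitude of a unit direct signal plus one delayed reflection.

    Nulls fall at odd multiples of 1/(2*delay): that regular notching
    is the 'phasey' comb-filter sound of a reflective room.
    """
    phase = 2 * math.pi * freq_hz * delay_s
    # Sum the direct signal and its delayed copy as phasors.
    real = 1 + reflection_level * math.cos(phase)
    imag = reflection_level * math.sin(phase)
    return math.hypot(real, imag)

delay = 0.001  # 1 ms ≈ a 34 cm longer path via a nearby reflective surface
print(comb_gain(500, delay))   # first null at 500 Hz: ≈ 0
print(comb_gain(1000, delay))  # back in phase at 1 kHz: ≈ 2
```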
In total I recorded:
Quite a few bits had several takes where I flubbed one bit or another, plus every video has a few seconds of lead-in and lead-out, so there ends up being a lot left on the cutting room floor!
These were made with my usual flip.js script and HTML; on the off chance that I want to present this at a real live conference some day, I figured I might as well have a real live slide deck.
Then I just took PNG screenshots of each slide.
I should really put together a better way of doing this, Markdown at the least, since writing HTML by hand is terrible.
The stylesheet is meant to look a bit like an old Apple ][ era computer, with an Apple font, inverse characters, green phosphor colour and a bit of a phosphor glow. It’s not very authentic but I kinda like it.
I edited it down with Davinci Resolve 16 which is fantastic and free.
There’s certainly a learning curve.
I ended up using the Fusion ‘UltraKey’ component to delete the background, which worked okay. The most complicated Fusion code involved:
You can copy Fusion compositions from one clip to another by going into the Fusion page, clicking ‘Clips’ at the top to show a list of all your clips, then selecting one or more clips to copy TO (with left click, as normal) and middle-clicking on the clip to copy the Fusion composition FROM. Then you can tweak the exact alignment of things in each clip.
There was a lot of white noise on the recording too, so I reduced that in Fairlight.
I had a nasty moment towards the very end when, with less than 24 hours to go to submit the video, the final render failed with this error:
Render job 11 failed as the current clip could not be processed. The Fusion composition on the current frame or clip could not be processed successfully
Which isn’t particularly helpful. I still don’t know what it was complaining about. I eventually found an online tip which said you could make Resolve slightly less fussy about frame errors by going to:
Davinci Resolve » UI Settings » Stop renders when a frame or clip cannot be processed
Which may have left a single glitchy frame behind, but so be it. The original message was utterly useless, so I have no idea what I could do to fix it. Deleting the Fusion compositions where the error occurred didn’t help.
“Dear @Blackmagic_News please hire me so I can find out who wrote this error message and snub them at parties.” — nick moore (@nickzoic) August 26, 2020
(for future reference: Resolve 16.2.4.016, on Windows 10 Home 1909 18363.1016)
A huge amount of data gets created as part of the video generation pipeline, so you’d better have somewhere fast to put it. I ended up having to delete a whole bunch of Steam games to make enough room!
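As a rough idea of the scale, assuming the camera writes 1080p at around 52 Mbps (the E-M5 mkII’s IPB rate, from memory; treat the number as an assumption), a couple of hours of raw takes is tens of gigabytes before you even get to render caches:

```python
def footage_gb(minutes, mbps=52):
    """Rough storage estimate for camera footage.

    52 Mbps is assumed here (the E-M5 mkII's 1080p IPB rate, from
    memory; its ALL-I mode is higher). Using 1 GB = 8000 megabits.
    """
    return minutes * 60 * mbps / 8000

# Two hours of raw takes for a 20-minute talk:
print(round(footage_gb(120), 1))  # ≈ 46.8 GB, before caches and renders
```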
20 minutes 40 seconds of video:
I’m happy with what I submitted but I’ve learned a lot so if I was going to submit this somewhere else I’d reshoot it with the following changes:
Buy a decent microphone up front and do a lot more test recording and review. Probably a ceiling boundary mic would work in this little reverby office.
The camera also adds high levels of white noise, which might be reduced with a better mic with its own power.
It’s pretty noisy here, so most takes were done quite late at night; most of the ones done during daylight were unusable, so I should have just not bothered with them.
I underestimated how much time it was going to take to glue the parts back together, and how tricky it was to get takes filmed on different days to match up with lighting and camera position etc.
I used the Fuji lens because at f/3.5 it’s twice as ‘bright’ as the Olympus zoom at f/4.9 … quite a lot of shots are ‘soft’ because the DoF is small and focus was off by a bit. I used a focussing target but it’s pretty hard to get this sorted out on your own, and I’m not sure I was always quite on my mark.
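The “twice as bright” figure checks out: light gathered scales with the inverse square of the f-number, so f/3.5 versus f/4.9 is almost exactly one stop.

```python
import math

def stops_faster(f_slow, f_fast):
    """How many stops brighter the faster lens is.

    Light gathered scales with 1/f-number squared, so the gain in
    stops is log2((f_slow / f_fast) ** 2).
    """
    return math.log2((f_slow / f_fast) ** 2)

print(round(stops_faster(4.9, 3.5), 2))  # ≈ 0.97 stops: about twice the light
```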
I’d probably have been better off using the autofocus zoom and making up for the smaller aperture with …
… more lights! I needed more specific lighting. Again, though, tiny office, I’d need to ceiling mount something.
Also I’d stump up for a better black background. I used a Halloween tablecloth, but it isn’t as matte as it should be … every now and then you can see a grey area to my right which is a curve in the fabric (I deleted most of these in post, but a few moments got through). Black or green felt might have worked better.
I’d pretend it was a normal conference presentation with slides, prop a monitor up under the camera and grab my remote clicker so I could do the thing in one long take.
At the same time I’d use ffmpeg to record the slides into a video with the same time cues as the talking, then just line the camera and slide video up in the editor, superimpose them and trim where necessary.
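One way to do that (a sketch; the filenames and timings here are hypothetical) is to write an ffmpeg concat-demuxer file listing each slide PNG with its on-screen duration:

```python
# Sketch: build an ffmpeg concat-demuxer file from slide PNGs and the
# number of seconds each slide is on screen. Filenames and timings are
# hypothetical; the demuxer wants the last file repeated so that its
# duration takes effect.
slides = [("slide01.png", 12.5), ("slide02.png", 40.0), ("slide03.png", 8.0)]

lines = ["ffconcat version 1.0"]
for name, seconds in slides:
    lines.append(f"file '{name}'")
    lines.append(f"duration {seconds}")
lines.append(f"file '{slides[-1][0]}'")  # repeat last entry

with open("slides.txt", "w") as f:
    f.write("\n".join(lines) + "\n")

# Then render it at the camera's frame rate, e.g.:
#   ffmpeg -f concat -i slides.txt -vf fps=50 -pix_fmt yuv420p slides.mp4
```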
Going up to the camera to start it recording meant that every take starts with my big blurry moon face, which means the white balance and auto exposure are thrown way off at the start of each take, and I had to wait a while for them to settle. There are a couple of bits where you can see the light changing at the start of a segment, where I hadn’t waited long enough for the camera to settle down.
Also this means that the “media library” icons all look like tired ghosts.
Another way to handle the print-on might have been to make the slides with a black background but a transparent corner cut off to show the presenter through. Since the slides are just still images this could be nicely alpha blended in, probably making all the fancy keying unnecessary.
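Sketching that idea per-pixel: with a straight alpha channel in the slide images, a standard “over” composite would do the whole job, no keying required:

```python
def over(slide_rgba, video_rgb):
    """Porter-Duff 'over': composite a slide pixel with alpha onto video.

    alpha = 255 shows the slide; alpha = 0 shows the presenter through
    the transparent corner, with no chroma keying needed.
    """
    r, g, b, a = slide_rgba
    alpha = a / 255
    return tuple(round(s * alpha + v * (1 - alpha))
                 for s, v in zip((r, g, b), video_rgb))

assert over((0, 255, 0, 255), (90, 80, 70)) == (0, 255, 0)  # opaque slide pixel
assert over((0, 0, 0, 0), (90, 80, 70)) == (90, 80, 70)     # transparent corner
```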
Ironically, the slides were written in HTML using flip.js.
It turns out that Davinci Resolve includes a visual language “Fusion” for manipulating videos, and so at least part of the production was done in a visual language!
This annoyed the heck out of me, because the project is an opaque thing; there’s no way to be sure you’ve backed it up against accidentally deleting the wrong bit of a timeline.
Really, what I want is a GUI for writing
If I’d had a heap more time I’d have bribed a teenager into doing line drawings of the photos and converted them to green glowing lines.
Thanks to Ryan at Next Day Video for checking over the video and helping me work out what encoding to use.