Earth to Venus by Don Slepian

Howdy! I've just released a video for a new song by Don Slepian; it turned out pretty nifty, I think! For anyone interested in how I made this, I put together a little write-up of the process below.


The Making of Earth to Venus

I began with the audio track that Don Slepian sent me. I opened up the track in Reaper and added an extra 10 seconds at the beginning, along with a loud clap that I recorded, so that there would be an a/v transient for easier synchronizing of everything at the end. I split the channels into two separate tracks and played around a bit with inverting phase and adding stereo microsecond delays while sending audio out through an Expert Sleepers ES-8 into an oscilloscope set to X-Y mode. I finally settled on generating two phase-offset sine waves, one for each channel, and then modulating the phase of each as well as the harmonic of each (i.e. each sine wave droned on one note, but each would jump up from the fundamental of that note's frequency to the 2nd, 3rd, 4th, and 5th harmonics and then back down again). If there were no other audio at all, what you would see would be a rotating set of shifting Lissajous shapes; with the audio from Don's song involved, what we had was the waveform of the song embedded onto those shifting Lissajous shapes. I then used my Panasonic Lumix G7 to rescan the oscilloscope screen of the waveforms at 4K.
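
If you want to play with this kind of signal without a modular setup, here's a rough numpy sketch of the general idea (not the actual Reaper/ES-8 patch; the fundamental, step timing, and modulation rate here are all arbitrary stand-ins):

```python
import numpy as np
from scipy.io import wavfile

SR = 48000      # sample rate
F0 = 110.0      # fundamental; arbitrary stand-in, not the actual patch value
STEP = 2.0      # seconds spent on each harmonic, also arbitrary
DUR = 32.0      # seconds of audio to render

t = np.arange(int(SR * DUR)) / SR
ladder = np.array([1, 2, 3, 4, 5, 4, 3, 2])   # up the harmonics and back down
idx = (t // STEP).astype(int)
hx = ladder[idx % len(ladder)]                # left channel = scope X axis
hy = ladder[(idx + 2) % len(ladder)]          # right channel offset so the X:Y ratio keeps changing

# integrate instantaneous frequency so the harmonic jumps stay phase-continuous (no clicks)
phase_x = 2 * np.pi * np.cumsum(F0 * hx) / SR
phase_y = 2 * np.pi * np.cumsum(F0 * hy) / SR

# slow opposite-signed phase modulation so the figure rotates and tumbles
wobble = 0.5 * np.sin(2 * np.pi * 0.1 * t)
left = np.sin(phase_x + wobble)
right = np.sin(phase_y - wobble + np.pi / 2)  # the 90-degree offset between the pair

stereo = np.stack([left, right], axis=1)
wavfile.write("lissajous.wav", SR, (stereo * 32767).astype(np.int16))
```

Send the left channel to the scope's X input and the right to Y and you get the rotating Lissajous ladder; mix Don's track on top of it and the song's waveform rides along the shapes.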

Next I took that footage, downscaled/cropped it down to 720x480, and popped it onto my Raspberry Pi media player. Why did I record at 4K if I was going to be downscaling so much? Because 1. oversampling is always a good idea when converting analog to digital, 2. I wasn't sure if I would want to use the original scope footage at any other point in the compositing process (i.e. running it through video_waaaves or spectral_mesh, or just as a layer in the editing process), and 3. I wanted to test out various settings on the Lumix for high resolution video captures off of the scope for future shoots. At this point I was also still unsure about what final resolution/aspect ratio I would be using for the project, so I wanted to have the highest resolution possible at each stage to work with.
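
The downscale/crop step can be done with whatever tool you like; something along these lines does it with ffmpeg (wrapped in Python to match the other sketches here, and the filenames plus the choice of a center crop are placeholders, not my exact settings):

```python
import subprocess

# center-crop the 16:9 frame to 3:2 (720x480's storage shape), then scale down
subprocess.run([
    "ffmpeg", "-i", "scope_4k.mp4",
    "-vf", "crop=ih*3/2:ih,scale=720:480",
    "-c:v", "libx264", "-crf", "18",
    "-c:a", "copy",
    "scope_480.mp4",
], check=True)
```
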
My Pi media player is a pretty simple affair. For stuff like this, where I have just one media file to play, I launch omxplayer from the command line. If I had multiple clips that I wanted to trigger and loop, I would use an openFrameworks project that I set up to work with MIDI controllers, so that different note values trigger different samples; I used several instances of these for the Recursive News project. I've found that using Pis to get digitally sourced footage into analog zones is more reliable and easier than any kind of dongle/converter chain coming off a computer.
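
The openFrameworks app itself isn't shown here, but the gist of the note-triggers-clip idea looks something like this Python sketch using mido and omxplayer (the note-to-clip mapping and paths are made up for illustration):

```python
import subprocess
import mido

# hypothetical note-to-clip mapping; the real project used an openFrameworks app
CLIPS = {
    60: "/home/pi/clips/scope_480.mp4",
    62: "/home/pi/clips/feedback_a.mp4",
    64: "/home/pi/clips/feedback_b.mp4",
}

player = None
with mido.open_input() as port:        # first available MIDI input
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0 and msg.note in CLIPS:
            if player is not None:
                player.terminate()     # stop whatever is currently playing
            player = subprocess.Popen(
                ["omxplayer", "--loop", "-o", "hdmi", CLIPS[msg.note]]
            )
```
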

Now I set up a camera feedback chain: a Sony DSR-250 camera running into a Panasonic MX70 video mixer, running into a Viditech Vidimate video enhancer, running into a Sony flatscreen LCD TV. I chose the Sony camera because it has very finely tuned iris, zoom, and focus controls on the lens, and I wanted to be able to easily switch between a lot of different feedback modes. I chose the Panasonic MX70 mainly because I wanted the individual channel Y/Pb/Pr controls for even finer control over the patterns, as well as the ability to control the speed and shapes of the movement using the 'afterimage' FX control. I chose the Vidimate because I wanted to be able to set things up for very blobby feedback and still have sharp and distinct outlines, as well as having one extra level of analog controls for brightness, saturation, and hue in the mix. I chose the Sony flatscreen because I wanted to utilize its upscaling features for extra moiré in the feedback chain. Even though the internal processing was all happening at SD resolutions, a camera sensor and TV working at higher resolutions give a different kind of feedback than a lower resolution camera sensor/TV combo. Oversampling!

To record the video feedback I used the same Lumix G7 and recorded the output from the MX70 off of a Sony Trinitron PVM. Trinitron PVMs are reliably my favorite CRTs to capture from; the phosphor pixels on them have a very nice quality, and the CRTs themselves have an excellent amount of controls for fine-tuning captures. At this point I was still unsure about what my final resolution would be, so I recorded the CRT 'pillarboxed' in 4K so that I would have the full vertical resolution to work with. My shooting setup involved taping a rigid piece of cardboard from the Trinitron's top all the way over to the Lumix, then draping blackout fabric over the cardboard so that there would be no possible reflections on the screen I was rescanning from.

Once I finished up the camera feedback portion of the day, I brought the feedback rescan footage into my computer to run through both Video Waaaves and Artificial Life. At this point I realized that I should have made a firm decision on my final resolution before I got started, as I got mildly sidetracked trying out different aspect ratios and resolutions while running my rescan feedback footage into Video Waaaves using VLC and NDI. I ultimately decided to not give a fuck quite yet and ran both the footage and Video Waaaves/Artificial Life at 1280x960. When in doubt I always default to 4:3 resolutions, because I like the composition of that 'nearly a square' canvas more than 16:9. I have a tendency to work with four-fold symmetries fairly often, and that kind of thing doesn't translate so well to 16:9.
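
By four-fold symmetry I mean mirroring one quadrant of the frame into the other three, which reads much better on a near-square canvas. A quick numpy illustration of the idea (this isn't code from Video Waaaves or Artificial Life, just the general concept):

```python
import numpy as np

def fourfold(frame: np.ndarray) -> np.ndarray:
    """Mirror the top-left quadrant into the other three quadrants."""
    h, w = frame.shape[:2]
    q = frame[: h // 2, : w // 2]
    top = np.concatenate([q, q[:, ::-1]], axis=1)       # left half + horizontal mirror
    return np.concatenate([top, top[::-1, :]], axis=0)  # top half + vertical mirror

# on a 4:3 canvas like 1280x960 each quadrant is 640x480, still close to square
canvas = np.random.rand(960, 1280, 3)
symmetric = fourfold(canvas)
```
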

To go from my rescan footage to Video Waaaves/Artificial Life, I used NDI Virtual Input on my Windows 10 machine to grab the footage from VLC into my programs. To capture the output from VW/AL I used OBS. I did a couple of run-throughs with each of VW and AL and made some timeline notes of when I wanted certain modes to come in and out, so that I could get one solid take from each to use in the final editing process. When making a video that I know will involve editing from multiple clips, I do my best to have the smallest possible number of clips to work with, as it makes the editing process that much easier.
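
For anyone curious how a program picks that footage up: NDI Virtual Input presents itself to Windows as a webcam-style capture device, so anything that can read a camera can read it. A bare-bones OpenCV sketch of that idea (the device index is whatever your system assigns, 0 here is a guess):

```python
import cv2

cap = cv2.VideoCapture(0)  # NDI Virtual Input shows up as a camera device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # hand the frame off to whatever is doing the actual processing
    cv2.imshow("ndi in", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```
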

Finally I had all of the footage recorded, and all that was left was to assemble everything in Premiere Pro. I had four layers of clips to work with: the top layer was just the scope and camera feedback, the second layer was the same scope and camera feedback with a luma key so that I could layer it over the VW/AL footage (I'm aware that this was mildly redundant, but it made some of the organization easier on my end), the third layer was Artificial Life, and the fourth layer was Video Waaaves. I brought in the music track and was able to sync everything up easily using that handclap transient. I then watched each section through with the music and added timeline notes of when I wanted each section to come through and how I wanted the layers to interact with one another.

It was about at this point that I realized I had goofed up some of the aspect ratio stuff when recording the VW/AL footage and had to stretch and reposition things to get them to properly match up. I actually consider this a happy accident, as having the various video layers offset in various ways radiating out from the center made for a really nice drop-shadow feel when compositing, but in the future I will make an effort to be more rigorous with aspect ratios and resolutions. I did very minimal processing to each of the layers: small amounts of color correction here and there, plus some blurring and sharpening whenever it seemed necessary to make the layers feel as though they were living in the same 'space.'

I made about three total drafts during this process, and with each render I would watch the entire thing on my largest monitor (I'm currently using a Philips flatscreen LCD TV as a secondary monitor; it is a bit low quality in many ways, but I wanted to view the output on a very large screen so that anything wonky with pixels or whatnot would jump right out at me) and took notes on the tiny tweaks I wanted to make to transitions. I mainly wanted to use Premiere just for compositing and transitions between layers; my goal with video projects like this is to steer the whole project towards the most minimal amount of post processing. I feel like DAWs and DVWs have encouraged a lot of folks to take an "I'll fix it in post" approach to projects, but I've found that it often takes significantly more effort to fix something in post than to fix it before you even record anything.
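
If you haven't messed with luma keying before, the idea is just: show the top layer wherever it's bright enough, and let the layer underneath show through everywhere else. A tiny numpy stand-in for what a luma key effect does (the threshold value here is arbitrary, not my actual Premiere setting):

```python
import numpy as np

def luma_key(top: np.ndarray, bottom: np.ndarray, thresh: float = 0.15) -> np.ndarray:
    """Show `top` wherever its luma clears the threshold, `bottom` elsewhere.
    Both frames are float RGB in [0, 1]."""
    luma = 0.299 * top[..., 0] + 0.587 * top[..., 1] + 0.114 * top[..., 2]
    mask = (luma > thresh)[..., None]
    return np.where(mask, top, bottom)
```
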

Things I learned from this process:

  1. In the future, whenever I'm going to be going back and forth between analog SD and digital HD resolutions, I will just choose one aspect ratio at the beginning of the project and stick with it. The Sony camera is capable of sending 16:9 signals through analog video cables, the Sony LCD TV can take 4:3 signals and stretch them to 16:9, and the Sony Trinitron can display a letterboxed HD signal, so it would be quite easy to work with. The only real catch is that I'd want to make sure any files I'm playing off the Pi media player are deformed in the correct way so that they fill the screen when I run them through the MX70 and then into the LCD TV. If my footage for the media player is 16:9, then just squishing the horizontal pixels to fit in 4:3 seems like it would work out perfectly. If my footage were 4:3, then I would probably clip the top and bottom and stretch the horizontal pixels by about 25 percent to make sure it both fills the screen and stretches out properly (there's a quick arithmetic sketch of this after the list).
  2. I might try doing the final composite as a rescan off of either an LCD or a CRT. I feel like this could save a step of color correction and could be a pretty easy way to upscale the finished video to 4K.
  3. I will likely try using more computers (and fixing up some stuff in my VW NDI support for Windows) when running high resolution video through any of my programs, to save as much processing power as possible. I can try playing video at higher resolutions on my laptop and using NDI Scan Converter to send that over to VW on my desktop for processing. I will probably try out some different methods of screen capturing, as I get the feeling that OBS is not the most efficient way to do screen grabs on Windows. It is very nice quality tho!
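
Here's the aspect-ratio arithmetic from point 1 spelled out (just bookkeeping, with my rough 25 percent figure plugged in):

```python
# going between 4:3 SD stages and 16:9 HD stages
SD = 4 / 3   # ~1.333
HD = 16 / 9  # ~1.778

# 16:9 footage into a 4:3 frame: squeeze the width and let the
# 16:9 display stretch it back out (a plain anamorphic squeeze)
squeeze = SD / HD                  # 0.75x width

# 4:3 footage filling a 16:9 screen with a ~25% horizontal stretch:
# the stretch alone only gets you to 5:3, so a small top/bottom crop
# covers the remainder
stretch = 1.25
keep_height = (SD * stretch) / HD  # 0.9375, i.e. clip ~6% off top + bottom

print(f"squeeze width to {squeeze:.2f}x, or stretch {stretch}x and keep {keep_height:.2%} of height")
```
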