Today you learned about

Hi! I think it would be great to have a thread of daily triumphs of what we learn in video art!

Today I was researching more about what chroma and luma are. What did you learn today?


Most recently I've been working more in general signal-flow territory: I've got a setup for adding audio-reactive input into waaave_pool, so I'm building functional knowledge of how to get useful low-latency data from FFTs running at audio rates into a 30 Hz video feedback zone. It's a little on the dry side for me, but I'm interested in future experiments with FFTs for video and audio synthesis projects, so it's pretty much always handy to have a grip on algorithms and hacks for working with a set of signals.
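As a sketch of the kind of glue described above (not waaave_pool's actual code, and the band edges and function names are my own): a common approach is to compute band energy per audio buffer with an FFT, then low-pass the result so the 30 Hz video loop can sample a stable value instead of audio-rate jitter.

```python
import numpy as np

def band_energy(samples, samplerate, lo_hz, hi_hz):
    """Energy in one frequency band of a single audio buffer,
    computed with a windowed real FFT."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / samplerate)
    mask = (freqs >= lo_hz) & (freqs < hi_hz)
    return float(np.sum(spectrum[mask] ** 2))

def smooth(prev, new, alpha=0.2):
    """One-pole low-pass: tames audio-rate jitter so the value read
    once per video frame (~30 Hz) doesn't flicker."""
    return prev + alpha * (new - prev)
```

The audio thread would update the smoothed value once per buffer; the video loop just reads the latest value each frame, so the two rates never have to line up exactly.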


Today I learned about aubio, an open-source audio analysis and labeling toolkit. It’s capable of beat/pitch detection and streaming MIDI converted from audio input. I spent part of today looking for low-latency guitar-to-MIDI solutions, and it looks like it could be a great open-source option for that, and also for generating data to drive visual parameters. Thinking about putting something together with Electron to connect it to WebGL.

Also interested in using it to auto-slice audio files; that could be powerful for creating and labeling sample libraries.
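One concrete step in any audio-to-MIDI pipeline, whichever tool handles the detection, is mapping a detected frequency to a MIDI note number. A self-contained sketch of that mapping (this is standard MIDI math, not aubio's API):

```python
import math

def freq_to_midi(freq_hz):
    """Nearest MIDI note number for a pitch in Hz.
    MIDI note 69 is A4 = 440 Hz; each semitone is a factor of 2**(1/12)."""
    return int(round(69 + 12 * math.log2(freq_hz / 440.0)))
```

A pitch tracker emits a frequency estimate per hop; quantizing like this (plus onset gating to decide when a note actually starts) is roughly what a guitar-to-MIDI converter has to do.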


TIL that there’s, like, a 10-channel audio mixer built into the Roland V-1HD. Total game changer.



When you establish a 50/50 mix of any original digital signal with a negative image of the same signal after passing through the encoding process, the resulting visual differences are the errors generated by the encoding.

source: Avid DNxHD white paper (in our wiki), p.7

This struck me as a brilliant idea (I don’t know shit about editing). I did this to compare the effects of different encoding parameters with ffmpeg. I used ffmpeg itself to extract the same frame from the original and from different encoded versions. Then in Gimp I opened those stills as separate layers, placed the original version at the top, applied “Linear Invert” to it, and reduced its opacity to 50%. Then I compared the artefacts visible in the resulting mix with each underlying layer.
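The GIMP steps above can also be reproduced numerically. A 50/50 mix of the original with an inverted encode lands identical pixels on mid-gray (127.5), so any visible deviation from flat gray is encoding error. A numpy sketch of that mix (my own function name; the frame arrays would come from the ffmpeg-extracted stills):

```python
import numpy as np

def encode_error_view(original, encoded):
    """50% original + 50% inverted encode, per the DNxHD white paper
    trick: where the codec was lossless the result is flat mid-gray,
    and any visible structure is encoding error."""
    o = original.astype(np.float64)
    e = encoded.astype(np.float64)
    return (o + (255.0 - e)) / 2.0
```

Note that this equals 127.5 + (original - encoded) / 2: the signed per-pixel error folded around mid-gray, which is why artefacts pop out so clearly against a neutral field.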

edit: more at


As a video editor who’s spent far, far too much time looking at DNxHD/ProRes white papers, I can’t believe I haven’t thought to mess around with this! Really cool, regretting not jumping on this forum earlier! Love this type of website compared to social networks etc. anyway :heart_eyes:

Do you mind if I try expanding on this idea to an extent by intentionally creating errors during encoding, and then applying the same process as you? Quite intrigued


Oh, that’s the way, uh-huh uh-huh
I like it, uh-huh, uh-huh



Yes!!! Have reviewed both white papers and tbh it’s gonna be a lot of trial and error, so I’ll queue up a bunch of labelled test encodes that might have interesting output and leave them on overnight haha. Amazing research from you though, I wouldn’t have a clue what I was doing with formats if this wasn’t my day job.

I recall coming across some earlier formats that produced interesting output with incorrect parameters in professional/broadcast use; there are so many formats in total that it’ll be interesting to scour the white papers and continue tests. Ingesting/encoding media was honestly so fucking difficult back then compared to what’s common now, using mixed P2 card and HDCAM for a PAL MPEG-2 digital broadcast, so I’ll be surprised if I don’t find anything lol

  • Never ever use 1-to-2 splitter cables (or adapters) to “mix” different signals into a single input (only use them to split a signal to 2 outputs). Equipment could get damaged. A likely explanation

  • If using guitar pedals, only use “mono” (TS) jack cables. Some pedals misbehave when used with “stereo” (TRS) jack cables. A likely explanation

(I’m cheating: I didn’t actually learn these today)
