Video processing in a raspberry pi environment

for folks interested in digging more into the details of using raspberry pis as a video processing/synthesis environment, i put together a template project set up to run on any of the video synth images i’ve got out there for the waaave_pool distro. i don’t really offer any tutorials on how to configure the raspbian environments for these because if you’re motivated enuf to want to take a stab at it yrself then you most likely can make it happen without external help. it would also be boring and lengthy, wouldn’t provide much directly helpful information related to video processing, and would simply repeat a lot of information that already exists out there on the oF website.

(tl;dr for that is: raspbian stretch and pi 3Bs are kind of my favorite for these purposes, pi4 and buster have gl performance issues that i don’t want to dig into, yall might want to work with some older versions of oF for arm cpus as a result, and configuring everything for addons can get a little wiggly sometimes, so doing some research on MAKEFILES will be helpful)
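(not from the original post, just a hedged illustration of the addons remark above: in an oF project, wiring in an addon like ofxMidi mostly comes down to an `addons.make` file sitting next to your project’s Makefile, one addon folder name per line:)

```make
# addons.make lives in your project folder, next to the Makefile
# each line names a folder under openFrameworks/addons/
ofxMidi
```

the project generator can write this for you, but when cross-fiddling on a pi it’s often easier to edit by hand and re-run `make`.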

but yeah, i’m not sure if i figured out how to properly format the code blocks for these posts (i’m pretty sure i did not actually!) but here’s the main info from the code zones. i’ll be setting up an online class to go through this at some point in the future. i’ll be asking for like a 20usd donation from participants, would like to have at least 5 folks signed up to go through with it, and would like to try out live lab for this purpose. i am considering a sign up format for some of my classes where folks first sign up for the class and then once we reach the minimum number of people interested we pick a date and time, curious if folks have any thoughts on this as well? seems like it could both simplify some things and complicate some other things. my main thing is i want to avoid doing classes of less than 5 folks because then it is unlikely that i will get enuf donations to make it worth my time and energies, and also all my classes will require a good deal of participation and interaction between folks. less than 5 ppl is essentially just moving over into my tutorial/consulting zone rates anyway, but i don’t know, just want to try some different things out and see if they stick to the wall!

here is the main c++ side of the code in ofApp.cpp

#include "ofApp.h"

#include <iostream>

float az = 1.0;
float sx = 0;
float dc = 0;

int fb0_delayamount=0;

//dummy variables for midi control

float c1=0.0;
float c2=0.0;

int width=0;
int height=0;

void ofApp::setup() {

	//this clamps the frame rate at 30fps
	//most usb cameras only work at 30 or 60
	//but it might be hard to get anything complex
	//to run at 60fps without framedropping or inconsistent
	//frame rates
	ofSetFrameRate(30);

	//this makes the background black
	ofBackground(0);

	//run this to hide the mouse cursor on the screen
	ofHideCursor();

	//so we are drawing a screen size of 720 by 480
	//however there aren't really any usb cameras that
	//work at that resolution so we grab the closest we can
	width = 720;
	height = 480;

	//we need to initiate the camera
	cam1.initGrabber(640, 480);

	//and we need to allocate memory for the framebuffer
	framebuffer0.allocate(width, height);
	framebuffer1.allocate(width, height);

	//this clears the data that we just allocated
	//we don't have to do this for the code to run
	//but if you don't clear the data it will be full
	//of random mishmash that was drawn to your screen in
	//recent history which can be fun to play with
	framebuffer0.begin();
	ofClear(0, 0, 0, 255);
	framebuffer0.end();
	framebuffer1.begin();
	ofClear(0, 0, 0, 255);
	framebuffer1.end();

	//this is telling the c++ where to find the
	//shader files.  the default path it is looking
	//into is bin/data, so this is short for
	//"look into bin/data/shadersES2 and see if you
	//can find two files named shader_mixer.vert
	//and shader_mixer.frag"
	shader_mixer.load("shadersES2/shader_mixer");

	// print input ports to console
	midiIn.listInPorts();

	// open port by number (you may need to change this)
	midiIn.openPort(0);
	//midiIn.openPort("IAC Pure Data In");	// by name
	//midiIn.openVirtualPort("ofxMidiIn Input"); // open a virtual port

	// don't ignore sysex, timing, & active sense messages,
	// these are ignored by default
	midiIn.ignoreTypes(false, false, false);

	// add ofApp as a listener
	midiIn.addListener(this);

	// print received messages to the console
	midiIn.setVerbose(true);
}


void ofApp::update() {

	//we need to update the camera every time to get a new frame
	cam1.update();
}


void ofApp::draw() {

	//begin midi biz

	//we store midi input messages in a buffer and then
	//each time this code runs per frame we look at all the
	//messages in the buffer and assign that information to
	//control variables
	for(unsigned int i = 0; i < midiMessages.size(); ++i) {

		ofxMidiMessage &message = midiMessages[i];

		if(message.status < MIDI_SYSEX) {
			if(message.status == MIDI_CONTROL_CHANGE) {

				//midi control change values default range
				//from 0 to 127
				//here you can see two ways to normalize them
				//c1 we will remap to go from 0 to 1 for unipolar controls
				//c2 we will map from -1 to 1 for bipolar controls
				c1 = message.value / 127.0;
				c2 = (message.value / 127.0) * 2.0 - 1.0;
				//   c2=(message.value)/127.00;
			}
		}
	}
	//end midi biz

	//so first we will draw everything to a framebuffer
	//so that we have something to ping pong for feedback
	//everything within the framebuffer.begin() and
	//framebuffer.end() will be drawn to a virtual screen
	//in the graphics memory, it won't be drawn to the
	//actual screen until we call framebuffer.draw
	framebuffer0.begin();

	//then we call the shader to begin
	//in between shader.begin() and shader.end()
	//all the nitty gritty of what is happening per
	//pixel is happening over in the shader zones
	//so basically all we are doing on the
	//c++ side is binding textures and sending some
	//variables over
	shader_mixer.begin();

	//calling .draw() on anything with a texture
	//binds that texture to the gpu
	//the first one you draw is automatically given the
	//name tex0 over on the shader side
	cam1.draw(0, 0, width, height);

	//but if we want to do camera input and do feedback on it
	//we need to also send the previous frame over to the
	//shaders as a texture, this is the format for doing so
	shader_mixer.setUniformTexture("fb0", framebuffer1.getTexture(), 1);

	//if we want to have input from our midi devices or
	//from our keyboard we need to send these variables over
	//as well.  setUniform1f is how we send over single float
	//numbers.  we can also call setUniform1i to send over integers
	//or setUniform2f to send over a vector with 2 components
	//and so on and so on
	shader_mixer.setUniform1f("fb1_mix", c1);
	shader_mixer.setUniform1f("fb0_xdisplace", sx);
	shader_mixer.setUniform1f("fb0_ydisplace", dc);

	shader_mixer.end();
	framebuffer0.end();

	//here is where we draw to the screen
	framebuffer0.draw(0, 0, width, height);

	//this is called a ping pong buffer
	//its the fundamental way we do feedback using framebuffers
	//we draw our output screen to an extra framebuffer object
	//and then pass that back into our shader to process it
	framebuffer1.begin();
	framebuffer0.draw(0, 0, width, height);
	framebuffer1.end();

	//i use this block of code to print out useful information for debugging
	//various things and/or just to keep the framerate displayed to make sure
	//i'm not losing any frames while testing out new features.  uncomment the
	//ofDrawBitmapString etc etc to print the stuff out on screen
	string msg = "fps=" + ofToString(ofGetFrameRate(), 2);
	// ofDrawBitmapString(msg, 10, 10);
}


void ofApp::exit() {

	// clean up
	midiIn.closePort();
	midiIn.removeListener(this);
}


void ofApp::newMidiMessage(ofxMidiMessage& msg) {

	// add the latest message to the message queue
	midiMessages.push_back(msg);

	// remove any old messages if we have too many
	while(midiMessages.size() > maxMessages) {
		midiMessages.erase(midiMessages.begin());
	}
}


void ofApp::keyPressed(int key) {

//here is how i map controls from the keyboard

//fb0 x displace
if (key == 's') {sx += .0001;}
if (key == 'x') {sx -= .0001;}

//fb0 y displace
if (key == 'd') {dc += .0001;}
if (key == 'c') {dc -= .0001;}
}


and here is the shader side of things in shader_mixer.frag

precision highp float;

uniform sampler2D tex0;

uniform sampler2D fb0;

uniform float fb1_mix;
uniform float fb0_xdisplace;
uniform float fb0_ydisplace;

varying vec2 texCoordVarying;

//here is how to write a function
//“vec3” defines what value gets returned, “rgb2hsb” is how
//this function gets called, “in vec3 c” means it takes in
//a single vec3 as an argument and passes it into a variable
//named c.
vec3 rgb2hsb(in vec3 c){
	vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
	vec4 p = mix(vec4(, K.wz), vec4(, K.xy), step(c.b, c.g));
	vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));

	float d = q.x - min(q.w, q.y);
	float e = 1.0e-10;
	return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}


//if we want to work with hsb space in shaders we have to
//convert the rgba color into an hsb, do some hsb stuffs
//and then convert back into rgb for the final draw to the screen
vec3 hsb2rgb(in vec3 c){
	vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
	vec3 p = abs(fract( + * 6.0 - K.www);
	return c.z * mix(, clamp(p -, 0.0, 1.0), c.y);
}

//main() is the main function being called per pixel.
//every bit of code in here is being called simultaneously
//for each pixel on the output screen. it is important
//when doing real time video processing to figure out
//what operations are best performed in the shader and
//which operations are best performed outside of the shader
//from a computational standpoint, say we want to have
//sd video resolution at 30fps. sd video is 720x480
//(i mean not quite but thats the best way to work with it
//from within the raspberry pi) so thats 345,600 pixels per
//frame, which means per second we have this code running
//10,368,000 times per second. if you work out what things
//need to happen only once per frame vs the things that need
//to happen on a pixel by pixel basis that means you can
//optimize your code by having an operation only perform
//30 times per second vs 10,368,000 times per second. for
//small programs and small resolutions you might not
//notice much difference but when you are trying out larger
//things and working with multiple textures these things
//tend to add up!
void main(){
//important to note that texCoordVarying here seems to be automatically scaled between 0 and 1
//no matter what the actual texel size of the texture is which makes some things easier
//and some things more complicated

//i'm defining a dummy variable that we can put just
//any color into.  colors have 4 variables (r, g, b, a)
//which translates to red, green, blue, and alpha
//rgb translates to how bright the individual rgb liquid
//crystals/leds/rgb phosphors get drawn to the screen
//alpha translates to the 'transparency' of the pixel
vec4 color=vec4(0,0,0,0);

//here i am defining another color and using texture2D
//to  pull a color out of a texture that has been sent 
//to the gpu.  in the c++ code you will see that 
//the camera input was sent to the gpu by the command
//cam1.draw() within shader.begin() and shader.end()
//when you do that it automatically gets bound to a 
//predefined variable named tex0
vec4 cam1_color = texture2D(tex0, texCoordVarying);

//uncomment this to see what happens
//what pow() is doing is 
//taking the first value in and calculating it
//to the power of the second exponent
//so this is taking in the rgba values of the
//camera color and squaring each one
//you can also feed it like vec4(.5) if you want to do
//fractional powers
//this is a cheap way to increase/decrease contrast
//but with some color and saturation glitching possible

//cam1_color = pow(cam1_color, vec4(2.0));

//ok i want to do some stuff in hsb now, here
//is how i get those values to play with
//cam1_color_hsb is a vector with 3 components 
//(hue, saturation, brightness)
//so cam1_color_hsb.x is hue
//cam1_color_hsb.y is saturation
//cam1_color_hsb.z is brightness
//all normalized from 0 to 1
vec3 cam1_color_hsb= rgb2hsb(vec3(cam1_color.r,cam1_color.g,cam1_color.b));

//ok if you want to invert brightness uncomment this
//cam1_color_hsb.z = 1.0 - cam1_color_hsb.z;

//if you want to desaturate everything and have black
//and white
//cam1_color_hsb.y = 0.0;

//then we have to convert back into rgb before we do anything else
cam1_color = vec4(hsb2rgb(cam1_color_hsb), cam1_color.a);

//some notes on coordinates
//texCoordVarying is a variable defined over in the
//vertex shader that specifies what pixel we are drawing
//in this version of gl coordinates are scaled from 0 to 1
//so if our texture we are drawing has resolution of 720x480
//then for the x coordinate 0 means the far left hand side
//of the screen, .5 means 360 pixels over right in the 
//center of the screen and 1 means 720 pixels over
//at the far right hand side
//for y 0 means the top of the screen, .5 means 240 pixels
//down from that and 1 means 480 pixels down at the bottom
//of the screen
//for doing feedback stuffs its good to have a lot of 
//control over where you are grabbing pixels from 
//so i like to define an extra coordinate variable to
//keep track of all of that
vec2 fb0_coord=vec2(texCoordVarying.x,texCoordVarying.y);

//lets displace the x and y by the amount that we sent in from
//the c++ code via the uniform variables fb0_xdisplace and
//fb0_ydisplace
fb0_coord.x += fb0_xdisplace;
fb0_coord.y += fb0_ydisplace;

//then lets get the color data out of the fb0 texture
vec4 fb0_color = texture2D(fb0, fb0_coord);

//we can either blend the two colors together
//using mix()
//the final value is the amount that we mix together
//mix() expects only values of 0 to 1 for this
//but it can be fun to experiment with going out of 
//bounds here with negative values or like going way over
//this is a general technique i recommend for experimenting
//seeing how the baked in functions respond to out of 
//bounds inputs is sort of like trying out circuit
//bending on hardware
color=mix(cam1_color, fb0_color,.5);

//and or try to luma key things together
//remember that cam1_color_hsb.z is the brightness of cam1
//that we calculated earlier
//color = mix(fb0_color, cam1_color, step(.5, cam1_color_hsb.z));

//this means that if the brightness of the
//camera input is less than .5 we will
//key in the framebuffer delay
//something worth noting is that it is good to avoid 'if' statements in shaders because of something called 'branching'
//when your code hits an 'if' statement it essentially 'branches' at that point into two paths, one for when the 'if'
//returns TRUE and one for when it returns FALSE.  on the c++ side the cpu just picks a path and moves on, so even at
//30fps the cost per frame is fairly negligible.  gpus are different tho: they run pixels in groups in lockstep, so if
//different pixels in a group take different paths the gpu can end up evaluating BOTH sides of the branch for every
//pixel in the group, every frame, and that can get to be an absurdly large cost when working with more complex logic situations

//there are techniques to get around branching logic situations where you define a uniform vecXi for X number of cases and where
//each integer component of the vector is either a 1 or 0 and then you can handle the logic in a more 'analog' manner as a 
//linear function of pixel (x y) or pixel (rgba) values.  i'll try and get a couple of examples of this up for when i 
//properly set up a class for this subject!

//gl_FragColor is the color we draw to the screen!
gl_FragColor = color;
}


