What's everyone working on these days?

And yeah, the recur approach is partly what I'm going for, but I'm pretty interested in trying to figure out more of this functionality in GL4.

I had performance issues in most of my previous attempts at running multiple shader passes in my stuff, plus all the branching involved in trying to set up a one-pass shader that bundled a bunch of different effects, but it kind of looks like subroutines could clean up a lot of the issues I had with both of those approaches…
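For anyone curious, here's a rough sketch of what the subroutine route can look like - the GLSL is embedded as a string purely for illustration, the names (`applyEffect`, `invert`, etc.) are made up, and it assumes GL headers / a loader are already included and that you have a compiled, linked GL4 `program` with an active context:

```cpp
// Hedged sketch only (not a full program): GL4 subroutines let you pick the
// effect per draw call instead of branching inside one big uber-shader or
// juggling many separately compiled programs.
const char* fragSrc = R"(
    #version 400
    subroutine vec4 EffectFunc(vec4 color);      // declare a subroutine "type"
    subroutine uniform EffectFunc applyEffect;   // the selectable slot

    subroutine(EffectFunc) vec4 invert(vec4 c)   { return vec4(1.0 - c.rgb, c.a); }
    subroutine(EffectFunc) vec4 passthru(vec4 c) { return c; }

    uniform sampler2D tex;
    in vec2 uv;
    out vec4 outColor;
    void main() { outColor = applyEffect(texture(tex, uv)); }
)";

// CPU side, called while the linked program is bound with glUseProgram():
void selectEffect(GLuint program, const char* name) {
    GLuint idx = glGetSubroutineIndex(program, GL_FRAGMENT_SHADER, name);
    // subroutine selections reset every time glUseProgram is called,
    // so this has to be re-sent per draw
    glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, &idx);
}
```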


I've been working on something similar, which is taking the shape of a sort of visual coding environment. So far it's mostly been a way to consolidate disparate bits of C++ written for different projects / artworks over the years, and also to decouple this from an always-unfinished GUI project (that's been rewritten twice!). The result is close to a hybrid of a VJ program and Max/MSP (though much simpler).

I'm focusing on I/O and hardware right now, but I'm curious about how GL shaders can be abstracted or cracked open into a node structure, as I haven't played with this kind of thing (vvvv, hydra, TouchDesigner). There's an amazing project Manolo Naon showed us at the last Video Circuits Berlin - I met him at a bar, and when we got talking about video circuits he whipped out his laptop and started hooking together all these oscillators and colourisers! Sadly, though, I don't think it will see the light of day :frowning:

I find the simplicity of the ISF spec nice given my limited GL experience, so I would probably approach GL from the perspective of how ISF works and could be improved - for example, a slightly abstracted GL language that compiles to different versions / flavours. For more complicated recursion, the problem as I understand it is working out how to inject something into the shader flow across multiple passes - hydra handles this by compiling a single shader to the GPU.
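To make the multi-pass problem concrete, here's a hedged openFrameworks-style sketch of the usual ping-pong FBO bookkeeping (the shader, uniform names and `pass.vert` / `pass.frag` files are invented and not shown) - roughly the machinery hydra sidesteps by compiling everything down to one shader:

```cpp
// Minimal sketch: each pass renders into one FBO while reading the other,
// then the two are swapped, so every pass (or the feedback path) can see the
// previous result. In practice these would live inside ofApp.
#include "ofMain.h"

ofFbo fbos[2];
int cur = 0;
ofShader passShader;              // pass.vert / pass.frag not shown here

void setupPasses() {
    fbos[0].allocate(1280, 720, GL_RGBA);
    fbos[1].allocate(1280, 720, GL_RGBA);
    passShader.load("pass");
}

void runPass() {
    int next = 1 - cur;
    fbos[next].begin();
    passShader.begin();
    // the previous pass (or previous frame, for feedback) comes back in as a texture
    passShader.setUniformTexture("prevPass", fbos[cur].getTexture(), 0);
    fbos[cur].draw(0, 0);         // full-frame quad pushed through the shader
    passShader.end();
    fbos[next].end();
    cur = next;                   // whatever was just written becomes the next input
}
```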

At the moment I'm mostly busying myself working out how to run updates and threads most efficiently. The core is roughly three layers: a base library of components that are methodless (i.e. only ever mutating bools / floats / ints rather than calling functions), a layer which standardises I/O nodes (passing vectors of floats, textures, sound buffers), and a GUI layer for connecting it all together.
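Purely to illustrate what that layering could look like (every name here is invented, not the actual codebase): a base-layer component that is just data, a standardised float-stream node, and an update function that does the mutation from outside:

```cpp
// Hypothetical sketch of the layering described above (all names invented):
// the base component only exposes data that gets mutated, never functions,
// and the node layer standardises how values move between components.
#include <cmath>
#include <vector>

struct LFOComponent {          // "methodless" base-layer component:
    float rate  = 1.0f;        // parameters and outputs are plain data...
    float phase = 0.0f;
    float out   = 0.0f;        // ...mutated by whatever owns the update loop
};

struct FloatNode {             // I/O layer: a standardised float stream
    std::vector<float> values;
};

// the update loop (not a method on the component) does the mutation:
void tick(LFOComponent& lfo, float dt, FloatNode& output) {
    lfo.phase += lfo.rate * dt;
    lfo.out = std::sin(lfo.phase * 6.2831853f);
    output.values.push_back(lfo.out);
}
```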

For sound processing I've been using ofxSoundObjects and would recommend contributing to that (there's an ofxAudioAnalyzer node in my fork which is being used for a music video right now).

I'm a bit pulled in all directions during this pandemic, but hopefully I'll have more time to work on it all soon!


@andrei_jay have you seen this? I've not looked into it yet but it seems promising: Audio Injector

> @andrei_jay have you seen this? I've not looked into it yet but it seems promising: Audio Injector

Haha, yes I have - it's on my shopping list to mess with once I finish up a couple of other projects and have some better default projects set up for audio on the Pi! I was originally looking for something with just 4 outputs so I could play around with XYZ display controllers from the Pi, but if I'm going to mess around with multi-channel out I might as well make the whole thing modular and scalable as well - there's a lot of potential for that. Also cautiously interested in it for the simultaneous audio-visual synth stuff I play with and abandon about once per year.

I've been researching a lot about composite video sync generators these days.
I started by looking at how it was done prior to all-digital solutions, like on the first Atari arcade games where everything was done with CMOS - mostly progressive sync, which is rather simple to implement with basic logic.
I found a few schematics of interlaced sync generators done with logic only, but it was rather complex to adapt them to generate both composite video sync standards.
So this made me pull the trigger on digging more into FPGAs, as those are perfect for this task. I wrote VHDL code for a simple PAL/NTSC composite sync generator based on a Xilinx CoolRunner-II (XC2C64); it's still lacking proper resets for genlocking and such, but I'll be happy to share it once it is a bit more developed.
I really like the simulation environment of FPGAs; being able to see what's happening at each clock cycle is super handy for testing code. I was trying to do a similar thing with an Arduino, but I had to compile and hook it up to a scope after each modification to check what was happening on a precise line of the video signal. I also tried the serial monitor, but it's hard to check precise stuff that way imo.
Anyway, I might start a dedicated thread about video sync generation, as it is a broad subject.
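For anyone who hasn't poked at this: what the CPLD ends up doing is essentially two cascaded counters plus a few comparisons. Below is a hedged software model of that counter logic - the timing constants are ballpark PAL-ish values at an assumed 4 MHz pixel clock, and the hsync-XOR-vsync composite trick is the simplified arcade-style version (no equalising or serration pulses), not the real broadcast waveform:

```cpp
#include <vector>

// Ballpark PAL-ish timing at an assumed 4 MHz pixel clock:
const int CLOCKS_PER_LINE = 256;   // 64 us per line
const int HSYNC_CLOCKS    = 19;    // ~4.7 us horizontal sync pulse
const int LINES_PER_FRAME = 312;   // progressive, no interlace
const int VSYNC_LINES     = 3;     // simplified vertical sync interval

// Simplified arcade-style composite sync: csync = hsync XOR vsync, active low.
bool compositeSync(int clk, int line) {
    bool hsync = (clk  < HSYNC_CLOCKS);
    bool vsync = (line < VSYNC_LINES);
    return !(hsync ^ vsync);
}

// Two nested counters - exactly what two cascaded hardware counters would do.
std::vector<bool> buildFrameWaveform() {
    std::vector<bool> waveform;
    for (int line = 0; line < LINES_PER_FRAME; ++line)
        for (int clk = 0; clk < CLOCKS_PER_LINE; ++clk)
            waveform.push_back(compositeSync(clk, line));  // one sample per pixel clock
    return waveform;
}
```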

Otherwise, I've been reading up a lot on Y/C separation (basically composite to S-Video), mainly because having luminance and chrominance separated from a composite signal makes it easier to process in analog (and also in digital of course, as it is required to convert the source signal to an RGB/YUV colorspace). I've tested a few simple analog filters, which didn't work super well; the main issue is that some chrominance is still left in the luminance, so some monitors pick up colours even at low levels, which doesn't allow for a true b&w image. There were specialized digital comb filter ICs, analog in / analog out, made when analog video was still prevalent in the mass market, but most are obsolete, so for a digital solution it's better to look at what is still made.
From what I've found, the comb filters are now fully integrated into video decoder chips such as the ADV ones from Analog Devices, which also take care of converting the signal to digital. That means that in order to get an analog Y/C output, you would also need a video encoder (and a micro or something to set the parameters of both ICs over I2C).
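For reference, the core idea inside those comb filters is tiny: for NTSC the colour subcarrier flips phase from one line to the next, so summing two adjacent lines cancels chroma and subtracting them cancels luma. Here's a hedged sketch of that textbook 1H (one-line-delay) comb - PAL needs a different delay because of the V-switch, and real decoder chips are adaptive, so this is only the starting point of what the ADV parts do internally:

```cpp
#include <vector>

struct YC { std::vector<float> y, c; };

// One-line-delay comb filter over a sampled composite signal (NTSC-style):
// each sample is combined with the sample exactly one line earlier.
YC combFilter(const std::vector<float>& composite, int samplesPerLine) {
    YC out;
    out.y.resize(composite.size());
    out.c.resize(composite.size());
    for (size_t i = 0; i < composite.size(); ++i) {
        float prevLine = (i >= (size_t)samplesPerLine)
                             ? composite[i - samplesPerLine]
                             : composite[i];              // no history on the first line
        out.y[i] = 0.5f * (composite[i] + prevLine);      // chroma cancels -> luminance
        out.c[i] = 0.5f * (composite[i] - prevLine);      // luma cancels   -> chrominance
    }
    return out;
}
```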

@andrei_jay super interested in your video mixer project! Reading video decoder datasheets recently made me think about how feasible a DIY video mixer would be too, but yeah, the whole syncing/processing of signals in the digital realm is way over my head currently :stuck_out_tongue:
Just checked the Jetson Nano - it looks like it can sync the two video inputs, which is really nice.

@palomakop I've watched your latest work involving ferrofluids - it's brilliant. The control you have over it makes it look like a living thing, and it's super cool to see some "behind the scenes" pictures.

@cyberboy666 awesome work!! As we previously discussed, including some kind of stroke-to-raster scan conversion could be nice for those who don't have an analog scope.

@autr eager to see more - I really like visual coding environments. I'm not super "code literate", so this helps a lot! That's also what I liked about FPGAs, as it is possible to program them using only logic building blocks. I ended up learning really basic VHDL because I didn't manage to make what I wanted with the schematic entry, but it sure helps to understand a bit more about how it all works.


I hope all of these posts get their own thread one day - so much good stuff :fire::fire::fire::fire:


Bumping this thread! I also started working on something new this weekend.
Here is a silly little demo I just recorded with it to promote the chat, lol


@cyberboy666 thank you for this absolute gift.

Side note: when I saw the thumbnail, I was like "chat scan … is that like a French cat scan?"


Live coding Tidal and Hydra in Atom


I'm in the (extremely) early stages of putting together some eurorack modules, including a combined sync-generator and ramps module. LZX recently announced a lot of new modules, and it looks like they are going to put out a very similar one, so I was a bit discouraged, but I think I'll still push ahead with it and see if the market can bear my take on the idea.


text manipulator??

At the moment I'm working with contour finders and tracking algorithms. The original video material is from CalArts. First, I manually search the original video for sequences that follow the movement of the dancers over several seconds. These sequences are then analyzed with OpenCV and openFrameworks almost in real time and mixed with the original video.

These images are based on a video from CalArts
Dance 2019
Original video: cc by nc 3.0 https://vimeo.com/322918472
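Not the poster's actual code - just a minimal OpenCV sketch of the kind of per-frame contour pass described above (the threshold value and blob-size cutoff are arbitrary, and a real tracker would match these centroids between frames):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Threshold the frame, find the outer contours, and pull out blob centroids
// that a tracking step could then follow from frame to frame.
std::vector<cv::Point2f> findDancerCentroids(const cv::Mat& frame) {
    cv::Mat gray, mask;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, 128, 255, cv::THRESH_BINARY);   // crude segmentation

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point2f> centroids;
    for (const auto& c : contours) {
        cv::Moments m = cv::moments(c);
        if (m.m00 > 500.0)                                     // ignore tiny blobs
            centroids.emplace_back(m.m10 / m.m00, m.m01 / m.m00);
    }
    return centroids;
}
```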


Using the optical flow from a video to animate a mesh.


Yes hehe it is/was a text generator and scrabbler.

It was coming along nicely, but I got a bit discouraged when I couldn't figure out how to sync-lock it to an external video signal (think text-overlay style).

Still, it will be fun / useful without that, but I wanted it all, haha. I'll come back to this project and have some more info about it some time this year, I hope :sweat_smile:

Yeah, sync is dang expensive and cumbersome to work with in DIY for sure. Cheapo RPi solutions that output simple colored text on black or green backgrounds for easy luma/chroma keying via an existing mixer have been my hack for anything like that.

Not working on it yet, but I'm thinking about learning Godot well enough to build myself some tools for low-poly, MIDI-controlled generative 3D animation.

EDIT: on hold for now, focusing on Axoloti and building some more hardware - mostly audio - for the next few months.

Nice work! Cool to see that it's just about got to the point where real-time CV-based stuff is applicable to video art! That stuff just works so friggin' weird and wobbly.

Hi Kandid. I like your video a lot. I am curious about your description, “the optical flow”. It is striking because it could mean that the incoming visual information is already stretching and twisting almost like taffy. Or maybe you think of visual information more like a gushing, unstructured energy that is “captured” by a system and then stretched and twisted by the mesh? Apologies if my question seems to be taking your description too literally. Nonetheless it is a great metaphor with roots in theories of perception.


"Farneback optical flow" is a technical term for processing an image stream coming from a (surveillance) camera; I shortened the full term in the title of the dance video. Farneback optical flow is used to detect where in an image there is movement, and the result is a bulk of vectors. You can then use these vectors to disturb a mesh and blend the color from the original camera input back onto the mesh.
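Roughly, that pipeline in OpenCV terms (a hedged sketch, not the actual aNa code - the mesh type and the `strength` scaling are invented):

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

struct Vertex { float x, y; };

// Run Farneback flow between two consecutive greyscale frames, then use the
// per-pixel motion vectors to drag the mesh vertices around.
void disturbMesh(const cv::Mat& prevGray, const cv::Mat& currGray,
                 std::vector<Vertex>& mesh, float strength = 2.0f) {
    cv::Mat flow;                                  // CV_32FC2: (dx, dy) per pixel
    cv::calcOpticalFlowFarneback(prevGray, currGray, flow,
                                 0.5, 3, 15, 3, 5, 1.2, 0);
    for (auto& v : mesh) {
        int px = std::min(std::max((int)v.x, 0), flow.cols - 1);
        int py = std::min(std::max((int)v.y, 0), flow.rows - 1);
        const cv::Point2f& f = flow.at<cv::Point2f>(py, px);
        v.x += f.x * strength;                     // vertex follows the motion
        v.y += f.y * strength;
    }
}
```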

I am still exploring the different motion tracking strategies implemented in OpenCV - how can I use them in an artistic way? In some cases these tracking algorithms lose control, which gives interesting results.

I will publish the source code later as part of "analog Not analog", but I need a week to get a version with fewer programming errors: Thomas Jourdan / aNa · GitLab

Thank you so much for that description of Farneback optical flow (FOF). I searched the phrase to supplement your information, and now I understand a little more of your description and your video! It is fascinating to me that FOF is a method for estimating the direction and speed of any point in the moving image. That means it is not simply displaying a memory of a past location of points in a photographic image, artificially altered and mapped to the mesh; instead, if I am understanding correctly, FOF disturbs the mesh using motion estimates that act almost like predictions of where those points are headed ("future memories", so to speak), constantly corrected and updated in real time. Brilliant. Thanks.
