Been researching composite video sync generators a lot these days.
Started by looking at how it was done prior to all-digital solutions, like on the first Atari arcade games where everything was done with CMOS; those were mostly progressive sync, which is rather simple to implement with basic logic.
Found a few schematics of interlaced sync generators done with logic only, but they were rather complex to adapt to generate both composite video sync standards.
So this made me pull the trigger and dig deeper into FPGAs, as those are perfect for this task. I wrote some VHDL for a simple PAL/NTSC composite sync generator based on a Xilinx CoolRunner-II (XC2C64); it still lacks proper resets for genlocking and such, but I'll be happy to share it once it's a bit more elaborate.
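To give a rough idea of the approach (just an illustrative sketch, not my actual code: the 8 MHz clock, the entity name and the counts are all assumptions for the example), the line-rate part boils down to a free-running counter and a comparison:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical line-rate sync sketch (progressive only, no equalizing
-- or serration pulses). With an assumed 8 MHz clock, one 64 us PAL line
-- is exactly 512 clocks, and the ~4.7 us sync tip is about 38 clocks.
entity line_sync is
    port (
        clk   : in  std_logic;  -- assumed 8 MHz
        csync : out std_logic   -- composite sync, active low
    );
end entity line_sync;

architecture rtl of line_sync is
    signal h_count : unsigned(8 downto 0) := (others => '0');  -- 0..511
begin
    process (clk)
    begin
        if rising_edge(clk) then
            if h_count = 511 then
                h_count <= (others => '0');  -- wrap at end of line
            else
                h_count <= h_count + 1;
            end if;
        end if;
    end process;

    -- Sync tip low for the first 38 clocks (~4.75 us) of each line
    csync <= '0' when h_count < 38 else '1';
end architecture rtl;
```

The real thing of course also needs a vertical counter and the broad/equalizing pulse sequences, which is where PAL and NTSC differ, but it's all the same counter-and-compare pattern.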
I really like the simulation environment of FPGAs; being able to see what’s happening at each clock cycle is super handy for testing code. I was trying to do a similar thing with an Arduino, but I had to recompile and hook it up to a scope after each modification to check what was happening on a precise line of the video signal. I also tried going through the serial monitor, but it’s hard to check precise timing stuff that way imo.
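For anyone curious, a testbench for a sketch like the one above is only a handful of lines, and the simulator's waveform viewer does the rest (again just a hypothetical example matching the sketch):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Minimal testbench: free-running 8 MHz clock into the sketch above,
-- then inspect csync against h_count in the waveform viewer.
entity line_sync_tb is
end entity line_sync_tb;

architecture sim of line_sync_tb is
    signal clk   : std_logic := '0';
    signal csync : std_logic;
begin
    dut : entity work.line_sync
        port map (clk => clk, csync => csync);

    clk <= not clk after 62.5 ns;  -- 8 MHz clock
end architecture sim;
```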
Anyway, I might start a dedicated thread about video sync generation as it is a broad subject.
Otherwise I’ve been reading up a lot on Y/C separation (basically composite to S-Video), mainly because having luminance and chrominance separated from a composite signal makes it easier to process in analog (and in digital too of course, since separation is required to convert the source signal to an RGB/YUV colorspace). I’ve tested a few simple analog filters, which didn’t work super well; the main issue is that some chrominance remains in the luminance, so some monitors pick up color even at a low level, which doesn’t allow for a true b&w image.

There were specialized digital comb filter ICs, analog in/analog out, made back when analog video was still prevalent in the mass market, but most are obsolete now, so for a digital solution it’s better to look at what is still being made.
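As a rough illustration of the 1H comb idea (my understanding of the principle, not tested hardware): for NTSC the chroma phase inverts from one line to the next, so averaging a sample with the one from exactly one line earlier cancels chroma and leaves luma, and the difference leaves chroma. A sketch, assuming the composite signal is already digitized to 8 bits at 4x the subcarrier (910 samples per NTSC line); all names here are made up, and PAL needs a different delay arrangement because of its phase alternation:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical 1H comb sketch for NTSC: sum of two adjacent lines
-- gives luma, difference gives chroma. Assumes 8-bit samples at 4x fsc
-- (exactly 910 samples per line).
entity comb_1h is
    port (
        clk    : in  std_logic;
        sample : in  unsigned(7 downto 0);  -- digitized composite
        luma   : out unsigned(7 downto 0);
        chroma : out signed(8 downto 0)     -- signed, centered on zero
    );
end entity comb_1h;

architecture rtl of comb_1h is
    constant LINE_LEN : integer := 910;  -- samples per NTSC line at 4x fsc
    type line_buf_t is array (0 to LINE_LEN - 1) of unsigned(7 downto 0);
    signal line_buf : line_buf_t;
    signal wr_ptr   : integer range 0 to LINE_LEN - 1 := 0;
begin
    process (clk)
        variable delayed : unsigned(7 downto 0);
        variable sum     : unsigned(8 downto 0);
    begin
        if rising_edge(clk) then
            delayed := line_buf(wr_ptr);   -- sample from exactly 1H ago
            line_buf(wr_ptr) <= sample;    -- overwrite with current sample
            if wr_ptr = LINE_LEN - 1 then
                wr_ptr <= 0;
            else
                wr_ptr <= wr_ptr + 1;
            end if;

            sum    := resize(sample, 9) + resize(delayed, 9);
            luma   <= resize(shift_right(sum, 1), 8);    -- (a + b) / 2
            chroma <= signed(resize(sample, 9))
                      - signed(resize(delayed, 9));      -- a - b
        end if;
    end process;
end architecture rtl;
```

In practice the difference term still contains vertical luma detail, which is why real combs band-limit it around the subcarrier (and why adaptive 2D/3D combs exist), so this is very much a toy version.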
From what I’ve found, comb filters are now fully integrated into video decoder chips like the ADV parts from Analog Devices, which also take care of the analog-to-digital conversion. That means that to get an analog Y/C output, a video encoder would be required too (plus a micro or something to set the parameters of both ICs over I2C).
@andrei_jay super interested in your video mixer project! Reading video decoder datasheets recently made me think about how feasible a DIY video mixer would be too, but yeah, the whole syncing/processing of signals in the digital realm is way over my head currently.
Just checked the Jetson Nano; looks like it can sync the two video inputs, that’s really nice.
@palomakop I’ve watched your latest work involving ferrofluids, it’s brilliant; the control you have over it makes it look like a living thing. Super cool to see some “behind the scenes” pictures too.
@cyberboy666 awesome work!! As we previously discussed, including some kind of stroke-to-raster scan conversion could be nice for those who don’t have an analog scope.
@autr eager to see more, I really like visual coding environments; I’m not super “code literate” so this helps a lot! That’s also what I liked with FPGAs, as it is possible to program them using only logic building blocks. I ended up learning really basic VHDL as I didn’t manage to make what I wanted with the schematic entry, but it sure helps to understand a bit more how it all works.