What's everyone working on these days?

Love this. How large is the water tank?

1 Like

Thanks! The tank is 12 litres, about 1 foot wide. I've used both smaller and larger tanks and liked this one best.

1 Like

It’s really beautiful, would love to see more of your work.

2 Likes

Thank you :slight_smile: On videos.scanlines of course!

I quite like your astronaut flying into the broken LCD.

3 Likes

v.nice!

1 Like

I just finished this

I put it on scanlines, but it shows an incorrect duration.

It is inspired by the many galactic cycles we are in the convergence of. So much solar activity right now - Eclipse tomorrow, and a full moon too!
I love the luminaries, I always track their progress : )

3 Likes

I just did a module re-rack and am about to re-cable the hardware rack mount to solidify the design now that I've had it in for a while.

https://www.instagram.com/p/CpvL0vIOtMF/

this is what the system looked like when I first got it put back together.

I’ll post some pics when I get everything finished up.


the first video is from a TD patch I've been working on. The number of lines needs to come down just a bit for some more separation, but I'm super happy with the look. I was filming some dichroic glass pieces and thought they would look good through this process. It's wild to me to be able to do this kind of effect at 60fps in real time on the computer.


The second video is a workflow I've been doing with a suite of apps on the computer (this is only using one of them) that I'm calling our collage deck. I take all manner of our recordings and piece them together into a moving collage. Because I'm using the computer desktop, I can capture at 4K. I used ffmpeg for some mirror processing using the command below.

ffmpeg -i a.mov -filter_complex '[0:v]split=3[in2][in3][in4];[in2]hflip[out2];[in3]vflip[out3];[in4]vflip,hflip[out4]' -map '[out2]' ah.mov -map '[out3]' av.mov -map '[out4]' ahv.mov

This takes one input video and spits out 3 variations: H mirror, V mirror, and H&V mirror. These three outputs, along with the original input, are then combined in the following step into one 8K base. This all happens with no scaling.

ffmpeg -i a.mov -i ah.mov -i av.mov -i ahv.mov -filter_complex "nullsrc=size=7680x4320 [base];[0:v] setpts=PTS-STARTPTS [upperleft];[1:v] setpts=PTS-STARTPTS [upperright];[2:v] setpts=PTS-STARTPTS [lowerleft];[3:v] setpts=PTS-STARTPTS [lowerright];[base][upperleft] overlay=shortest=1 [tmp1];[tmp1][upperright] overlay=shortest=1:x=3840 [tmp2];[tmp2][lowerleft] overlay=shortest=1:y=2160 [tmp3];[tmp3][lowerright] overlay=shortest=1:x=3840:y=2160" -c:v libx264 output2.mp4

this was just for fun, to see if I could make some high-resolution video art through simple means. I'll use the same technique with downscaling from 4K to 1080 on the original capture to make mirrored 4K output instead.
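That downscale-then-mirror variant can also be done in a single pass with ffmpeg's `xstack` filter (ffmpeg 4.1 or newer), which replaces the nullsrc/overlay chain and the intermediate mirror files. A hedged, self-contained sketch, using a synthetic `testsrc` clip in place of a real capture:

```shell
# Hypothetical sketch (assumes ffmpeg with libx264): a synthetic 4K test
# clip stands in for the real capture. It is downscaled to 1920x1080,
# split four ways, mirrored, and tiled into a 3840x2160 mosaic; no
# scaling happens after the initial downscale.
ffmpeg -y -f lavfi -i "testsrc=size=3840x2160:rate=10:duration=0.5" \
  -filter_complex "[0:v]scale=1920:1080,split=4[a][b][c][d];[b]hflip[bh];[c]vflip[cv];[d]hflip,vflip[dhv];[a][bh][cv][dhv]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0" \
  -c:v libx264 -pix_fmt yuv420p mirrored4k.mp4
```

Swapping the `testsrc` input for a real capture file would give the actual pipeline; the layout string places the four 1920x1080 tiles at the corners of the 3840x2160 frame.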


In the third video I take the output from our system at 1080 and, via a webcam feed, open multiple copies of the same output in multiple copies of the same app. Then, by arranging them and using some built-in H&V mirroring, I can make a 4K video with no upscaling on my computer's desktop and run it in real time.

In this instance the LZX system that drives all 6 feeds at once is still live, which means it can be manipulated in real time.

So far the most copies I've run at once is 16, with the system at 720, to make a 4K output with no scaling.

we’ve been up to a bunch more than that but these are some of the most recent things.

5 Likes

heyo ~ branching out a bit with this new one! did a video installation for a local San Francisco “pop-up” restaurant over two nights! Two projectors on hdmi-ethernet lines to Resolume running clips I composite from footage I shot and edited… digging the way it came together; cheers yall… :slight_smile:

Here’s some time-lapsed 10x video of part of the install -



4 Likes

…recording of the livestream of the ambioSonics live session 146 with video artist Robin Sutcliffe ( @PAUL_TEMPLE ) in Bavaria, Munich, Glockenbachwerkstatt:

…the visuals were much more brilliant live - and as we recorded them separately too, i guess Robin will release some shorter clips later (after editing them)…

3 Likes

Thanks @fairplay it was a really great event. Music was really enjoyable I liked it a lot. Would definitely come again :slight_smile:

2 Likes

Channel 1

Just wanted to share a first trial video we've been working on. It's a rough start, but we'd love feedback on what to work on!

5 Likes

would recommend trying to smooth the keying out if possible - is it happening in hardware or software?

1 Like

hardware, but then majorly datamoshed… definitely trying to sort out the blending better!

1 Like

I spent a couple of years with datamoshing as one of my big side projects, using just music videos as input: making as many basic datamosh videos as I could and exploring what kind of source makes for the best moshing. I started to get an eye for that and then switched up my process.
regular data moshing > predictive datamoshing

an example of our original datamoshing output

an example of our predictive datamoshing workflow

we just played with those two methods for a while. predictive datamoshing is a workflow that we came up with and I haven't seen elsewhere.

in the last couple months we came up with a new method we are thinking of calling temporal distortion datamoshing

here is an example of temporal distortion datamoshing

the goal with all this is to get comfortable enough in the practice to include it in our video synthesis workflows. Feeling comfortable in that place at this point, we did a recent music video project utilizing datamoshing, specifically the temporal distortion workflow.

this one isn’t released yet as the artist wanted to add some things before and after the video that weren’t in our scope of work.

We have a mostly unlisted playlist on our youtube of 78 of our best datamoshing results from the past couple of years. I go through it a couple of times a year, weeding out the less impactful ones and adding to it throughout the year. We stream that playlist on the LZX discord all day on December 31st; maybe this year it will just be on youtube directly. The goal is to get to 100 videos we are happy with and then do a stream talking about the process of actually making the first type of datamoshing.

9 Likes

I’ve only gotten through the first quarter so far but the video you posted is really rad. Lots of good ideas on display for sure.

I love a chunky datamosh key, nothing wrong there!

I really like having different levels of keying available. I've got hardware soft/hard keys, a nice BMD set of keys in the Constellation HD, and then a suite of different keying methods on the computer. They all have their place; I don't always reach for the cleanest keyer, it really depends on the desired output.

one thing I really appreciate about the video is how many changes are happening.

really great work!

1 Like

Thanks so much for sharing all your videos! A lot of great learning material to work through. It helps a lot as we keep going!

1 Like

Those are sick! What is datamoshing? I've seen that effect a lot but didn't know that was the name.

Really excited about this one. Couldn’t have figured it out without Andrei’s awesome tutorials!

9 Likes

An old album of mine just got reissued (the original was a small DIY run, mostly sold person-to-person and directly to shops), and I spent a good chunk of time last week making goofy, low-resolution promos for the label that released it.

Recur playback through my current setup; I captured 3-40 minutes of footage for each video and then cut the actual promos to the audio track in Reaper. Now that I'm getting my head around the way video processing and compositing actually work in Reaper, I hardly even open "real" NLEs anymore for basic stuff, since I'm so much faster with it from all the years of using it for audio that it offsets the weirdness of how it handles video. Once you start compositing more than two layers it gets counterintuitive and cumbersome in some ways, but for simpler stuff it's a lot more powerful than I realized, and I haven't even started to really get into the dozens of user-added features that are available.

For these it was just simple cutting, all the visual stuff is hardware.

EDIT: all the compression artifacts are part of the source material, I just went with it.

6 Likes

@PAUL_TEMPLE had asked (also here on scanlines.xyz) earlier this year for events he could join and contribute his live visual art…

…somehow we managed to be on two events together already, and a third one might be possible…

…i made a few videos from those events (one in Munich, Germany, the other in Hannover, Germany), some of which i'll link here - one is a livestream from June

and the others are cutouts from a livestream at the end of July

…all live visuals: @PAUL_TEMPLE - https://www.youtube.com/channel/UCDupIM35ZnO9Y7mKARc4G9Q

3 Likes