ISOSCELES


Magic Skool - VJ Loops Pack

Download Pack

This pack contains 125 VJ loops (21 GB)
https://www.patreon.com/posts/109276615

Behind the Scenes

Double double toil and trouble, VJs mix their visuals subtle! Years ago I saw a finger tutting performance and it's always stuck with me. So I've long wanted to collaborate with a performer, yet I'd had trouble finding the right person. But then I stumbled across the gloving community and found some amazing performers. That's when I came across the work of Kevin Cablay (aka Puppet), who is a prolific lightshow artist and gloving coach. So I reached out, pitched a collaboration, agreed on a rate, and we were off to the races. He mentioned that, due to the history of gloving, there's long been a disconnect between glovers and the EDM scene. He's always dreamed of having his gloving performance up on the big screen, so this was a perfect meeting of minds. For the performance recordings my guidelines were simple: the dance vibe should feel like wizardly witchcraft, wear a long sleeve black shirt, stand against a black background, face not visible, and zero lighting except the LED gloves. Kevin ran with this and produced these incredible performances. Couldn't have done this without him!

The original impetus was to create visuals of occult spells, magic circles, and sigils that feel like something is being conjured. So I experimented with Stable Diffusion, but either the model isn't well trained on these types of visuals or I couldn't nail down a precise text prompt. But then I wondered if there might be custom LoRAs available that could enable this, and sure enough there were several awesome options over on CivitAI. Luckily each of these LoRAs outputs a black drawing on a white background. So I experimented with various text prompts to augment the LoRAs to my needs and then rendered out 10,000 images for each text prompt. Then I had to manually look through all of the images and curate only the best ones.
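
If you're curious what that batch-rendering step might look like, here's a minimal sketch using the Hugging Face diffusers library, assuming a LoRA file downloaded from CivitAI. The model name, LoRA filename, prompt, and output folder are placeholders, not the exact ones used for this pack.

```python
# Hypothetical sketch: batch-rendering sigil images from one text prompt with a
# custom LoRA via the diffusers library. Paths and names are placeholders.
from pathlib import Path
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("loras", weight_name="sigil_lora.safetensors")  # LoRA from CivitAI (placeholder)

prompt = "occult magic circle, black ink sigil on white background, intricate linework"
out_dir = Path("renders")
out_dir.mkdir(exist_ok=True)

# Render a large batch; each seed yields a unique sigil to curate afterwards.
for seed in range(10_000):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, num_inference_steps=30, generator=generator).images[0]
    image.save(out_dir / f"sigil_{seed:05d}.png")
```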

Over the years I've experimented with various frame interpolation software, but when the inter-frame movement is too large, the interpolation gets stretched too far and falls apart. So when I received an alert that Topaz Video AI had released a new slowmo model called "Aion", I was quite curious to try it out. I took the curated frames that I had made, numbered them sequentially to look like a frame sequence, and then ran the slowmo processing on that footage. This did a much better job than I've experienced with other models, and it's opened up some interesting new experimental possibilities for stop motion footage.
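
The renumbering step is simple but easy to get wrong. Here's a small sketch of how it could be done, assuming a folder of curated stills; the folder names are placeholders.

```python
# Hypothetical sketch: copy a folder of curated stills into a sequentially
# numbered frame sequence so a frame-interpolation tool (e.g. Topaz Video AI)
# can treat it as footage. Folder names are placeholders.
import shutil
from pathlib import Path

src = Path("curated_sigils")      # hand-picked renders, arbitrary filenames
dst = Path("frame_sequence")
dst.mkdir(exist_ok=True)

for i, img in enumerate(sorted(src.glob("*.png"))):
    shutil.copy(img, dst / f"sigils_{i:05d}.png")  # sigils_00000.png, sigils_00001.png, ...
```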

Since I now had a collection of 40,000 sigil images, I combined a selection of the SD LoRA images and was curious to see whether I could fine-tune StyleGAN2 on this dataset. SG2 really likes to lock in on the big-picture patterns that it finds and ignore the unique details, which is why I have to heavily curate and organize the order of the seeds so as to hide the fact that it's overly generalized. With my limited GPU compute, it's very tricky to determine the ideal value for the gamma hyperparameter so that the diversity of the dataset is retained during training. Alas, for all the ways that AI tools can save time, they just use up more time in other areas to address their shortcomings.
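
For anyone wanting to try something similar, here's a rough sketch of what a small gamma sweep might look like with NVIDIA's stylegan2-ada-pytorch training script. The paths, gamma values, and training budget are illustrative assumptions, not the settings used for this pack.

```python
# Hypothetical sketch: short fine-tuning runs over a few gamma values using
# stylegan2-ada-pytorch's train.py, to compare which setting preserves the
# diversity of the sigil dataset. All values are placeholders.
import subprocess

for gamma in (2, 8, 32):
    subprocess.run([
        "python", "train.py",
        "--outdir", f"runs/gamma_{gamma}",
        "--data", "datasets/sigils.zip",   # built beforehand with dataset_tool.py
        "--gpus", "1",
        "--gamma", str(gamma),
        "--kimg", "500",                   # short run, just enough to compare
        "--resume", "ffhq512",             # fine-tune from a pretrained checkpoint
    ], check=True)
```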

Then I got to thinking and was curious if anyone had created an app that generates visuals of magic circles. And to my amazement, I found exactly that! Magic Circle Generator is a nicely designed app and a blast to explore. I was careful to follow the license terms and not distribute the raw animations that the app generates, so I modified the renders in multiple ways. I also found this wonderful Sigil Generator that creates unique sigil shapes.

From there I took all of the footage into After Effects and had lots of fun doing compositing experiments. Since the SD frames needed to be treated as if they were stop motion, I had to import each frame into AE as an individual image instead of as the typical frame sequence. I don't think I've ever imported so many thousands of frames individually into AE before, and it was definitely moving a bit slower than usual. I had originally wanted to do a 3D particle simulation in Maya, but then I realized that I could rely on the "CC Particle Systems II" plugin in After Effects to create the effect I was looking for. I didn't think that adding slitscan FX would work in this context, but it ended up being stunning since the patterns layer up so nicely. I took a few of the videos and injected them into NestDrop and got some nice happy accidents. Don't play with fire, unless it's generative!

Seeing as how the gloving performance videos are lit entirely by the LED gloves, I did an interesting experiment where I used the "Time Difference" FX so that only fast motion was allowed to be visible, which effectively hid the LED hot spots whenever the gloving performance moved slowly. I also did an alternate version where I applied the gloving performance video to itself as a luma mask and then offset the mask by 1 frame. Time illusions are bizarre! And from there, adding some glow FX brought that extra bit of magical charm.
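
To give a sense of the "only fast motion stays visible" idea outside of After Effects, here's a tiny OpenCV sketch that differences each frame against the previous one. The filenames are placeholders, and this only approximates the Time Difference FX; it isn't the actual AE project.

```python
# Hypothetical sketch: keep only fast motion by differencing consecutive frames.
# Static pixels (like a slow-moving LED hot spot) cancel to black, while fast
# gestures leave bright trails.
import cv2

cap = cv2.VideoCapture("gloving_performance.mp4")   # placeholder filename
fps = cap.get(cv2.CAP_PROP_FPS) or 60.0
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out, prev = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if out is None:
        h, w = frame.shape[:2]
        out = cv2.VideoWriter("fast_motion_only.mp4", fourcc, fps, (w, h))
    if prev is not None:
        out.write(cv2.absdiff(frame, prev))  # pixels that didn't move go dark
    prev = frame

cap.release()
out.release()
```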

And of course, right when I thought I was done, I had another idea that ended up being quite intensive to render out through multiple stages of preprocessing. I wanted to make the LED hot spots of the gloves look like flames. Having recently figured out how to do feedback loops inside of AE, this seemed ripe for experimentation, since a feedback loop can pull off fluidic and gaseous appearances without needing to run any simulation. So I used the Time Blend FX and then applied the Vector Blur FX within the feedback loop to make it feel like flames. Yet there was an issue that almost made me give up… You could see the individual frames within the feedback loop, which meant that there weren't enough time steps feeding the loop, even at 60fps. This artifact was ruining the overall feeling of fire. After some careful thinking I figured out a solution. I took the footage into Topaz Video AI and applied x8 slowmo processing, which turned a 6 minute video into a 48 minute video. Then I brought that slowmo footage back into AE, masked it so that only the hot spots were visible, applied the Time Blend FX & Vector Blur FX, and rendered it out at full length. Then I imported that footage back into AE, sped it up by x8, layered it on top of the original video, and tweaked the gradient color mapping to look like fire using the BFX Map Ramp plugin. This technique allowed me to smooth out the artifact by doing x8 more processing between each frame of the original video. We live in the age of interpolation. Abracadabra! Alakazam! Avada Kedavra!
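
As a rough illustration of the feedback-loop idea, here's a small OpenCV/numpy sketch: each output frame blends the new hot-spot frame with a shifted, blurred, faded copy of the previous output, then maps intensity through a fire-like gradient. It only approximates the Time Blend + Vector Blur + BFX Map Ramp chain in AE, and the filenames and constants are placeholders.

```python
# Hypothetical sketch: a simple feedback loop that makes bright spots leave
# rising, smearing trails, then colors the result like fire.
import cv2
import numpy as np

cap = cv2.VideoCapture("hotspots_only.mp4")          # masked LED hot spots (placeholder)
fps = cap.get(cv2.CAP_PROP_FPS) or 60.0
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out, feedback = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    if feedback is None:
        h, w = gray.shape
        out = cv2.VideoWriter("flames.mp4", fourcc, fps, (w, h))
        feedback = np.zeros_like(gray)
    # Feedback loop: drift the previous output upward, blur and fade it,
    # then add the new frame on top so hot spots trail like flames.
    trail = np.roll(feedback, -2, axis=0)
    trail = cv2.GaussianBlur(trail, (0, 0), 2, sigmaY=6)
    feedback = np.clip(trail * 0.92 + gray, 0.0, 1.0)
    # Map intensity through a fire-like gradient (black -> red -> yellow -> white).
    fire = cv2.applyColorMap((feedback * 255).astype(np.uint8), cv2.COLORMAP_HOT)
    out.write(fire)

cap.release()
out.release()
```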

https://www.jasonfletcher.info/vjloops/
