This pack contains 88 VJ loops (93 GB).
The mysterious guts of AI. What happens to a neural network after it's been trained? GANs have multiple hidden layers, but we only ever see the final result. So I wanted to crack one open and experiment.
I was exploring the StyleGAN3 interactive visualization tool and was curious to see the internal representations of the FFHQ1024 model, yet the tool didn't include a way to render out video. I started thinking through some hacky options, but I really didn't want to do a screen-capture recording due to the lower quality, and I also wanted perfectly smooth morphs between seeds without needing to animate the attributes by hand. I happened to be exploring some of the additions to the StyleGAN3-fun repo when I saw that PDillis had posted a todo list that included adding support for exporting internal-representation videos. So I opened a ticket to show interest and then became a GitHub Sponsor. Many thanks to PDillis!
Each internal layer of the neural network contains a few hundred channels. So I soloed a single layer, selected 3 specific channels, arbitrarily mapped those channels to RGB, and then rendered out a latent seed-walk video. But I wanted to leave room for happy accidents while compositing in After Effects, so I did 2 series of exports:
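The actual export is handled by the stylegan3-fun repo; as a rough sketch of the channels-to-RGB idea, here's a minimal NumPy version using random data as a stand-in for one layer's activations (the layer shape and channel indices below are illustrative, not the real model's):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for one layer's activations; real SG3 layers have
# shapes like (batch, channels, H, W), e.g. (1, 512, 148, 148).
activations = rng.standard_normal((1, 512, 148, 148)).astype(np.float32)

def channels_to_rgb(act, channel_ids):
    """Solo 3 arbitrary channels and min-max normalize each to [0, 1],
    mapping them to the R, G, and B planes of a frame."""
    chans = act[0, list(channel_ids)]                 # (3, H, W)
    lo = chans.min(axis=(1, 2), keepdims=True)
    hi = chans.max(axis=(1, 2), keepdims=True)
    rgb = (chans - lo) / (hi - lo + 1e-8)             # per-channel normalize
    return np.moveaxis(rgb, 0, -1)                    # (H, W, 3) image

frame = channels_to_rgb(activations, (3, 141, 200))   # arbitrary channel picks
```

Rendering a video is then just repeating this per frame of the seed walk and writing the frames out.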
First I rendered out 48 videos: 8 clips of each layer, each clip visualizing 3 unique channels, with all of the clips locked to a single seed sequence. This gave me 6 layers visualized with 24 channels each, 144 unique channels in total to play with in AE. These proved very interesting to jam with: since they all shared the same core animation yet each had unique coloring and texture, I could layer them together in wild ways.
Then I rendered out another 48 videos: 8 clips of each layer, each clip visualizing 3 unique channels, with each clip given its own seed sequence. This gave me 48 videos where every clip had a unique core animation as well as unique coloring and textures.
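The "perfectly smooth morphs between seeds" come from interpolating latent vectors rather than keyframing anything by hand. stylegan3-fun does this internally; a hedged sketch of the idea (the easing curve and frame count here are my own choices, not the repo's):

```python
import numpy as np

def seed_walk(seeds, frames_per_transition, z_dim=512):
    """Smoothly interpolate latent z vectors between a list of seeds.

    Cosine easing gives zero velocity at each keyframe, so the morph
    lands softly on every seed -- no manual attribute animation needed.
    """
    zs = [np.random.RandomState(s).randn(z_dim) for s in seeds]
    walk = []
    for a, b in zip(zs, zs[1:]):
        for i in range(frames_per_transition):
            t = i / frames_per_transition
            t = (1 - np.cos(np.pi * t)) / 2        # ease in/out
            walk.append((1 - t) * a + t * b)
    walk.append(zs[-1])                            # end exactly on final seed
    return np.stack(walk)

latents = seed_walk(seeds=[10, 20, 30], frames_per_transition=60)
```

Locking every clip to one seed sequence (the first batch) just means reusing the same `latents` array across layers; giving each clip its own sequence (the second batch) means calling `seed_walk` with different seed lists.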
From there I did a bunch of compositing experiments in After Effects. First I used the ScaleUp plugin to upres all the videos to 2048x2048, which was quite a large jump considering the original SG3 layer resolutions were 148x148, 276x276, 532x532, and 1044x1044. For the single-seed videos, I combined the clips into different experiments using the soft light and hard light blend modes, then rendered those out so I could keep doing compositing experiments without the render times becoming absurd. From there I did some color-range cutouts with added glow, explored the difference and subtract blend modes paired with PixelSorter and PixelEncoder so that only the added FX was visible, and experimented with BFX Map Ramp to tweak the colors and get crazy with its cycle attribute.
I always love doing a displacement-map experiment in Maya/Redshift. Normally I enjoy using a volume light, but this time I used a point light, which produced some interesting shadows when placed at the edge. I also ran with brute-force global illumination, since the noise didn't matter in this speedy context. AI secrets are surreal.