
Discussion on: Harmonizing Pixels and Paint: The Artist's Journey in the Technological Renaissance

bennoH. (edited)

Thank you very much for this beautifully designed and discussion-stimulating post. I'm basically always open to new things, and I tried out "Stable Diffusion" for pure image generation via Clipdrop on my mobile with a measly 6 GB of RAM. I only lightly edited the pictures in Snapseed, added a frame and a text mark, and the ones with pretty girls in particular quickly went through the roof on my flickr account ( flickr.com/photos/bennoh/533418671... ).

But I have to say it is more the creativity of the AI itself that determines the result; steerable, deliberately detailed generation is still too far away. I was often more than astonished by the results, and quite pleased on many occasions, but I frequently had to pick a different use for the result than the one I had intended. That is slowly changing: deliberate, conscious control is not guaranteed yet, but it will certainly improve with purpose-built software. Luckily, I am also someone who likes to work things out intuitively and tends to look forward to surprises and then keeps working with them, which is only possible in my own free projects. In contract work that quickly becomes a problem.

I also don't understand why you can't tell the AI, in a simpler way, what was unacceptable about a certain result and how it should be redone while otherwise keeping everything the same: for example, keep the exact face, hair and clothing and change only the background and pose (see the rough sketch at the end of this comment). The current "diffusion to video" examples often end up with a completely different person, face, hair and so on, and I think that is usually not what was wanted. So at the moment it is often a lottery, with sometimes better results and sometimes nothing usable at all, and even the complete misses are occasionally brilliant pictures that you save anyway.

It is still in its infancy, and it is easy to see what AIs are coming up with when it comes to visuals. But something we forget is that these technologies have long been far more widespread than we realize: quietly and inconspicuously in the background of the photo apps on our smartphones, for example, compensating for the small lenses with sharpening, detail reconstruction, exposure and color corrections. AI-based algorithms have long been standard in commercial applications and devices. Steering and controlling such brain-inspired mathematical marvels across a broader range of areas, however, will certainly require a lot of research and development.

You can certainly create cascades of abstractions like SleepleMonk, but for me such visuals quickly become repetitive, and the enjoyment of viewing them goes from total hype to complete boredom. Things like Tool3 seem to me a more sensible place to invest the time, because there you also have the necessary controls. Blender and its AI plugins still look very interesting to me at the moment; "Stable Diffusion" is on board there again as well, but in a somewhat more controllable form.
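To make concrete what I mean by keeping the face and changing only the background: the closest thing I know of today is masked inpainting, where you mark which pixels may be regenerated and everything else stays untouched. Below is only a rough sketch of such a call with Hugging Face's diffusers library; the checkpoint name, file names, prompt and settings are placeholders I am assuming, not a recipe.

```python
# Rough sketch: keep face, hair and clothing, regenerate only the background.
# Assumes the Hugging Face "diffusers" library and a Stable Diffusion inpainting
# checkpoint; model id, file names and prompt are placeholders, not recommendations.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # placeholder inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The original render plus a mask: white pixels mean "repaint this",
# black pixels mean "leave alone". Painting the person black is what
# protects the face, hair and clothing from being changed.
init_image = Image.open("portrait.png").convert("RGB").resize((512, 512))
mask_image = Image.open("background_mask.png").convert("RGB").resize((512, 512))

# Fixing the seed makes the repainted background reproducible between attempts.
generator = torch.Generator("cuda").manual_seed(1234)

result = pipe(
    prompt="same young woman, unchanged, now in front of a rainy neon city street",
    image=init_image,
    mask_image=mask_image,
    generator=generator,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

result.save("portrait_new_background.png")
```

The pose is the harder part: once the mask covers the whole body, the model has nothing left to anchor the person to, and you are back in the lottery. That is exactly the kind of control I hope the purpose-built software will eventually give us.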