OP-Z AI Stable Diffusion

TE just shot out an email about someone (likely hired by TE) using the OP-Z to control AI-generated imagery. Interesting.

Can’t see any way for us to do that though.


“OP-Z gets smarter”

I def took that as a new OP-Z :frowning: Also, I sure wish regular customers could utilize this! But I recognize that AI-generated imagery uses heavy resources, so it would probably be a paid-for service if it were available to all of us.

Cool stuff! In any case this won’t run on the OP-Z itself, but on the connected iOS device, probably inside the OP-Z app like Photomatic. The OP-Z just provides the MIDI data.
I was thinking about something similar: marrying up two of Keijiro’s Unity developments, namely the TE videolab and his Pix2Pix implementation, where you can draw outlines and the AI fills in the rest in realtime. This looks like something similar, replacing the hand-drawn input with MIDI data via klak/videolab. I suspect the MIDI data will be remapped to natural-language phrases that then feed Stable Diffusion, plus some effects thrown in based on MIDI. Looks very exciting!
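To make the speculation concrete, here is a minimal sketch of what "MIDI remapped to natural-language phrases" could look like. Everything here is hypothetical: the note numbers, phrase tables, and `midi_to_prompt` helper are invented for illustration, and the real pipeline (if one exists) is not public.

```python
# Hypothetical mapping from MIDI note numbers to prompt fragments.
NOTE_PHRASES = {
    60: "a neon cityscape",
    62: "swirling liquid chrome",
    64: "geometric crystals",
}

# Hypothetical mapping from effect/CC identifiers to style words.
FX_PHRASES = {
    1: "heavy film grain",
    2: "glitch distortion",
}

def midi_to_prompt(notes, fx=()):
    """Build a Stable Diffusion prompt string from incoming MIDI data."""
    parts = [NOTE_PHRASES.get(n, "abstract shapes") for n in notes]
    parts += [FX_PHRASES[f] for f in fx if f in FX_PHRASES]
    return ", ".join(parts)

print(midi_to_prompt([60, 64], fx=[1]))
# → "a neon cityscape, geometric crystals, heavy film grain"
```

The resulting string would then be handed to whatever Stable Diffusion backend is in play; the interesting design question is whether that generation happens live or ahead of time, which the posts below get into.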

but when? :dagger:

I thought there was an update coming which would include this as a new mode, between Unity and Photomatic.

great addition for sure.


I took it as a new firmware update. Really don’t want a new OP-Z right now since I just bought a used one which hasn’t arrived yet hehe. OP-1 Field came out less than a year after I bought my first new OP-1 too. Which reminds me, gotta sell that.


Don’t think new firmware would be required, but an update to the OP-Z app (if this really runs inside the app, which it doesn’t look like at the moment).

I have a Mac Studio, the Ultra one… for most tasks the fastest Mac to date. Anyway, when running DiffusionBee locally (a port of Stable Diffusion), it certainly isn’t instant. At lower settings it takes maybe 5 seconds to render an image. So I don’t see how an iOS device with far less power could do it in real time. The MIDI messages must be triggering images/graphics already generated from prompts, IMO.
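A quick back-of-the-envelope on why 5 seconds per image rules out realtime generation (the 30 fps target here is just a rough realtime-video assumption, not anything TE has stated):

```python
# Observed DiffusionBee render time at low settings, from the post above.
SECONDS_PER_IMAGE = 5.0

achieved_fps = 1.0 / SECONDS_PER_IMAGE   # roughly 0.2 frames per second
target_fps = 30.0                        # assumed realtime video rate
shortfall = target_fps / achieved_fps    # roughly a 150x gap

print(achieved_fps, shortfall)
```

Even on a top-end desktop GPU that gap is enormous, which supports the theory that any per-frame work on an iOS device would be cheap effects over pre-generated images rather than live diffusion.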


Agree - what I understand when looking closer at the video is that for every key/mode (as per OP-Z master track I assume) there is a predefined prompt. I suspect that for these the images are generated upfront and only the effects are realtime. Similar to Photomatic relying on pre-recorded videos and then just applying effects and changing between videos. The effects here look more sophisticated, though.
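The "pre-generate upfront, apply effects in realtime" architecture described above can be sketched in a few lines. All names are hypothetical, and the brightness-scaling `apply_effect` is a trivial stand-in for whatever GPU effects the real app might use:

```python
# Hypothetical per-key prompts (one per OP-Z master-track key/mode).
PROMPTS = {"key_0": "neon grid", "key_1": "molten glass"}

# Stand-in for frames rendered upfront by Stable Diffusion: one tiny
# grid of pixel brightness values per key, cached before playback.
pregenerated = {key: [[10, 20], [30, 40]] for key in PROMPTS}

def apply_effect(frame, gain):
    """Realtime-cheap effect: scale pixel brightness by a MIDI-CC-derived gain."""
    return [[min(255, int(px * gain)) for px in row] for row in frame]

# At playback time, a key press selects a cached frame and MIDI only
# drives the effect parameters -- no diffusion in the hot path.
frame = apply_effect(pregenerated["key_0"], gain=2.0)
```

This matches the Photomatic comparison: the expensive content is fixed ahead of time, and only lightweight transforms run per frame.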


Well, after seeing this here below I can actually imagine it being more real-time than I thought:


Wow. Same.

Not sure I’ve seen Macs run Stable Diffusion locally? It’s mostly a PC thing with a fairly beefy GPU card. Could be nice to see it streaming from a web browser to keep things light.

I have the base M1 Ultra Studio, 20-core CPU and 48-core GPU. It runs it very well… but I’m also producing photorealistic images, not just colours and patterns like this, so it would likely be quicker doing this more basic imagery. DiffusionBee is the port I run since it’s native and nice and quick.

Nice, ok. I just looked into it and it seems like the M1 or M2 is the way to go for Macs. Thanks for explaining your process. I think if motion curves and realtime FX were added into the mix, you could have something rather dynamic to go along with the music 🪩


So can we actually get this?

Not at this stage. We don’t know if it’s just a pie-in-the-sky OP-Z marketing concept, or something that may become a feature later.


I haven’t heard whether this is something that works in real time. I also use DiffusionBee locally on my M1, and it’s pretty slow. I have been playing with Deforum Stable Diffusion animations and generating images to put into videolab. It’s been pretty fun so far. Here’s an example of some character animations I created: