One use case I’ve thought of for producing live visuals of other performers is to drive a videopak using audio from a live performance (e.g. not the OP-Z output), in much the same way that many lights take a microphone input and react to amplitude levels in real time.
Does anyone know if this is achievable with a videopak and OP-Z? E.g. get the level of the OP-Z’s microphone (or any I/O track input) to drive some aspect of a videopak’s visuals?
It can’t respond to live audio input, but it can be faked with sequenced triggers after you have recorded your audio. If live audio input visualization is what you’re after, then check out the EYESY by Critter & Guitari or download TouchDesigner.
Love this idea! Would a gate still be a trigger and not a visualized waveform? I guess you could map different frequency bands of the waveform to push mesh around in Unity? This would be a fun experiment; curious about the lag time as well. Thanks for posting this!
Thanks @poika. Simple setup: it’s mic volume converted into a CC value 0-127. But miRack has so many modules, it’s probably possible to tweak it to respond to specific frequencies. Lag time depends on your phone’s processor and your tolerance for lag. For a modulation pedal it’s unnoticeable, but for high-frequency modulation (like if you want to transmit a person’s speech via MIDI CC) it may lag or miss some frames/steps.
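For anyone curious what the "mic volume to CC 0-127" mapping described above looks like in code, here's a rough sketch (miRack does this internally with its own modules; the function name, RMS choice, and linear scaling here are my own assumptions, not how miRack actually implements it):

```python
import math

def amplitude_to_cc(samples):
    """Map a block of audio samples (floats in -1..1) to a MIDI CC value 0-127.

    Uses RMS as the loudness measure and a simple linear scale;
    a real patch might use peak level or a log/dB curve instead.
    """
    if not samples:
        return 0
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return min(127, int(rms * 127))

# Silence maps to CC 0; a full-scale signal maps to CC 127.
print(amplitude_to_cc([0.0] * 64))        # → 0
print(amplitude_to_cc([1.0, -1.0] * 32))  # → 127
```

The lag the post mentions comes from this kind of block-based processing: you can only update the CC once per audio block, so fast changes (like speech) between updates get smoothed over or skipped.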