CHROMOLA

Product Description

CHROMOLA Feature Overview

CHROMOLA is a real-time audiovisual instrument that fuses live video, MIDI, and generative automation for musicians, VJs, and creative technologists. It transforms your webcam or video files into a dynamic control surface, letting you both generate and manipulate music and visuals in a single, unified environment.

Key Features:

Live Video & Media Integration: Instantly switch between your webcam and video files as your visual source. Assign up to 28 media files (videos, images, GIFs) to MIDI notes 100–127, triggering them live from your MIDI controller—just like launching audio samples, but with visuals.
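
As a rough illustration of the idea (assuming a Python implementation with the mido library; the file names and mapping below are hypothetical, not CHROMOLA's actual internals), a note-to-media lookup might look like this:

```python
import mido

# Hypothetical mapping: MIDI notes 100-127 can address up to 28 media files.
media_files = ["clip_01.mp4", "logo.png", "loop_02.gif"]   # illustrative paths
note_to_media = dict(zip(range(100, 128), media_files))

with mido.open_input() as inport:                          # default MIDI input device
    for msg in inport:
        if msg.type == "note_on" and msg.velocity > 0 and msg.note in note_to_media:
            path = note_to_media[msg.note]
            print(f"note {msg.note} -> switch visual source to {path}")
            # a real app would hand `path` to its video/compositing engine here
```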

Generative Automation (KNOBS Tab): The app analyzes your processed video in real time, breaking down the grayscale and RGB histograms into 16 “knobs.” Each knob sends out a continuous MIDI CC stream (CC 1–16), allowing your visuals to modulate synths, effects, or DAW parameters organically.
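
To make the idea concrete, here is a minimal sketch of one way such an analysis could work, assuming a Python/OpenCV stack with mido for MIDI output (the 16-bin grayscale reduction shown is an illustration, not CHROMOLA's exact algorithm):

```python
import cv2
import mido

outport = mido.open_output()                                       # default MIDI output

def send_histogram_knobs(frame_bgr):
    """Reduce the frame's grayscale histogram to 16 levels and emit them as CC 1-16."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [16], [0, 256]).flatten()  # 16 brightness bins
    hist = hist / (hist.max() + 1e-6)                                 # normalize to 0..1
    for cc, level in enumerate(hist, start=1):                        # CC 1 .. CC 16
        outport.send(mido.Message("control_change", control=cc, value=int(level * 127)))
```

Called once per processed frame, a routine like this yields 16 continuously drifting CC streams that track the image's brightness distribution.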

Color Detection & MIDI Note Output (COLORS Tab): Define up to 7 custom color ranges using an intuitive HSV selector. When a tracked color appears in your video, CHROMOLA sends a MIDI Note On; when it disappears, it sends Note Off. Play melodies and chords with objects and colors in your environment.
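
A minimal sketch of this kind of color tracking, assuming Python with OpenCV and mido (the HSV range, pixel threshold, and note number are hypothetical examples):

```python
import cv2
import numpy as np
import mido

outport = mido.open_output()

# Illustrative: one tracked color range (HSV) mapped to one MIDI note.
tracked = [{"lo": (35, 80, 80), "hi": (85, 255, 255), "note": 60, "active": False}]

def track_colors(frame_bgr, min_pixels=500):
    """Send note_on when a tracked color appears in the frame, note_off when it disappears."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    for c in tracked:
        mask = cv2.inRange(hsv, np.array(c["lo"]), np.array(c["hi"]))
        present = cv2.countNonZero(mask) > min_pixels
        if present and not c["active"]:
            outport.send(mido.Message("note_on", note=c["note"], velocity=100))
        elif not present and c["active"]:
            outport.send(mido.Message("note_off", note=c["note"]))
        c["active"] = present
```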

Visual Effects Engine (VIDFX Tab): Stack up to 4 video effects from a deep library (swirl, trails, RGB split, and more). Map any effect parameter to any MIDI CC, use “Learn” mode for instant mapping, and “LockMIDI” to make a parameter follow your hardware knob or fader. Effects can be triggered by MIDI notes for live performance.
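
The "Learn" idea in particular is simple to picture: the next CC message that arrives gets bound to whichever parameter is waiting. A hedged sketch, again assuming Python with mido (the parameter names are made up for illustration):

```python
import mido

params = {"swirl_amount": 0.0, "trail_length": 0.0}   # illustrative effect parameters
cc_map = {}                                           # CC number -> parameter name
learn_target = "swirl_amount"                         # user clicked "Learn" on swirl amount

with mido.open_input() as inport:
    for msg in inport:
        if msg.type != "control_change":
            continue
        if learn_target is not None:                  # Learn: bind the next CC that moves
            cc_map[msg.control] = learn_target
            learn_target = None
        if msg.control in cc_map:                     # mapped: parameter follows the hardware knob
            params[cc_map[msg.control]] = msg.value / 127.0
```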

Media/Webcam Blending & Compositing: Blend your live camera with triggered media files using a smooth opacity slider—also MIDI-mappable. Effects can “tear holes” in your video, revealing the media layer underneath for advanced VJ-style visuals.
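
Conceptually, the blend is a weighted sum of the two layers, with the weight driven by the (MIDI-mappable) opacity value. A small sketch assuming OpenCV in Python:

```python
import cv2

def composite(webcam_bgr, media_bgr, opacity_cc):
    """Blend the live camera over a triggered media frame; opacity_cc is a MIDI CC value (0-127)."""
    alpha = opacity_cc / 127.0
    media_bgr = cv2.resize(media_bgr, (webcam_bgr.shape[1], webcam_bgr.shape[0]))
    return cv2.addWeighted(webcam_bgr, alpha, media_bgr, 1.0 - alpha, 0.0)
```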

Professional Workflow: Save your entire setup—color settings, MIDI mappings, media assignments, and more—to a single file. Reload instantly for gigs or studio sessions. Easily select and switch between multiple MIDI devices for both input and output.
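
A session file like this is conceptually just a serialized snapshot of the app's state. For illustration only (the file name and schema here are hypothetical), a JSON round trip in Python could look like:

```python
import json

session = {
    "colors": [{"lo": [35, 80, 80], "hi": [85, 255, 255], "note": 60}],
    "cc_map": {"7": "swirl_amount"},
    "media_notes": {"100": "clip_01.mp4"},
}

def save_session(path, state):
    """Write color ranges, MIDI mappings, and media assignments to a single file."""
    with open(path, "w") as f:
        json.dump(state, f, indent=2)

def load_session(path):
    with open(path) as f:
        return json.load(f)

save_session("my_set.json", session)
restored = load_session("my_set.json")
```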

Virtual Webcam Output: Stream your processed visuals directly into OBS, Zoom, or any app that accepts a webcam input.
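
In a Python build this kind of output is typically done with a library such as pyvirtualcam; a minimal, assumed sketch (resolution, frame rate, and camera index are placeholders):

```python
import cv2
import pyvirtualcam

cap = cv2.VideoCapture(0)                                   # physical webcam
with pyvirtualcam.Camera(width=1280, height=720, fps=30) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (1280, 720))
        # ...effects and compositing would run here...
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))    # pyvirtualcam expects RGB frames
        cam.sleep_until_next_frame()
```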

Performance Shortcuts & Responsive UI: Instantly clear all effects with the spacebar, switch back to webcam with the backtick (`), or print a full MIDI mapping report with ‘m’. All heavy processing runs in a background thread for smooth, lag-free performance.
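
The background-processing pattern described here is the standard producer/consumer split: the UI thread captures frames and handles keys, while a worker thread does the heavy analysis. A generic sketch in Python (not CHROMOLA's actual threading code):

```python
import threading
import queue

frames = queue.Queue(maxsize=2)        # tiny buffer keeps latency low

def processing_worker():
    """Run heavy video analysis off the UI thread so the interface stays responsive."""
    while True:
        frame = frames.get()
        if frame is None:              # sentinel: shut the worker down
            break
        # ...histogram analysis, color tracking, effects (see sketches above)...

worker = threading.Thread(target=processing_worker, daemon=True)
worker.start()
# The UI/capture loop would call frames.put(frame) for each new frame,
# and frames.put(None) on exit.
```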

Example Workflows:

Use your MIDI keyboard to trigger both sounds and visuals, with every gesture mirrored in both domains.
Let the environment (colors, movement) drive both the music and the visuals, creating an ever-evolving, hands-off experience.
Use the KNOBS tab to generate wild, organic MIDI CC data for modulating synths, samplers, or effects in your DAW.

CHROMOLA blurs the line between instrument and canvas—making your sound and vision truly interactive.


How It’s Innovative

CHROMOLA is innovative because it unifies real-time video processing, MIDI control, and generative automation into a single, deeply interactive instrument—bridging the worlds of music and visuals in ways few tools can.

Unlike traditional VJ or MIDI controller software, CHROMOLA creates a bidirectional feedback loop:

Visuals can play music: The app analyzes live video or media files, tracking up to seven user-defined colors. When a color appears, it triggers MIDI notes—so you can literally “play” a synth with objects, lights, or movement in your environment.
Music can sculpt visuals: Every video effect parameter (swirl, trails, RGB split, etc.) can be mapped to any MIDI CC, with instant “learn” and “lock” modes. This means your hardware knobs, faders, or DAW automation can morph the video in real time, making the visuals as expressive as the music.
The KNOBS tab is a breakthrough in generative automation: it analyzes the processed video’s color and brightness histograms, turning them into 16 continuous MIDI CC streams. These “visual LFOs” let your video content organically modulate synths, effects, or lighting—no scripting or manual programming required.

CHROMOLA’s media integration is also unique. You can assign video clips, images, or GIFs to MIDI notes and trigger them like visual samples. The compositing engine allows effects to “tear holes” in the live feed, revealing the triggered media underneath—enabling complex, layered visuals that respond to both MIDI and video input.

The app’s workflow is performance-ready:

All processing runs in background threads for smooth, lag-free operation.
You can save and reload your entire session (including MIDI mappings, color settings, and media assignments) with a single file.
Virtual webcam output lets you stream your generative visuals directly into OBS, Zoom, or any video app.

Example workflows:

A performer waves colored objects to trigger synth notes, while twisting MIDI knobs to morph the video in sync with the music.
A VJ assigns video clips to MIDI pads, blending them with live camera input and letting the video’s own “energy” modulate sound and effects.
An installation artist lets the environment’s colors and movement generate both visuals and sound, creating an evolving, hands-off audiovisual experience.

In short:
CHROMOLA is innovative because it’s not just a controller or a visualizer—it’s a true audiovisual instrument, where sound and vision are equal partners, each able to drive and transform the other in real time. This opens up new creative territory for musicians, VJs, and artists seeking deeper, more organic connections between what they see and what they hear.

See MIDI Innovation In Action

Most Inspiring Use Cases

Turning the act of making music in a DAW from a one-dimensional activity into a multimedia experience. It really does inspire new sounds and songs where I wouldn't have found them before.

Expansion Plans

The goal is to rewrite the application in C++ with VST compatibility, which will improve frame-rate performance. I would also like to integrate MIDI 2.0 for features such as MPE tied to note position and higher-resolution CC signals, which will cut down on audio and visual effects chatter. Beyond that, I want to drastically expand the video effects capabilities so that users can create their own effects and integrate them into the application. There are also tertiary avenues I would like to explore with the technology, such as games and assistive technologies.

Commercialization

I would like to continue development toward a full C++ release while supporting a beta release of the current product.