Positronic Recursion Studio
The Entropy & Sons Positronic Recursion Studio is the time stream's premier almost-holomorphic kaleidoscoping fractalizer. It's a video synthesizer brought to you straight from the year 14,725 by a collection of futuristic time-traveling AIs trying to fund their research into super weird sci-fi stuff.

Underneath the pitch, it is a massively parameterized dynamic pixel shader network running on embedded hardware, housed in a self-contained unit that creates novel real-time HD kaleidoscopic fractals in response to an array of inputs. It responds to MIDI over USB and DIN connectors, has 8 CV jacks, has a dedicated jack for extracting beats and other interesting features from audio-rate signals, can accept webcam input over USB (or even HDMI signals with the included HDMI-to-USB converter), and has an onboard touchscreen and arrays of knobs and buttons for internal customization and control. It's basically a rectangle that you plug all of your signals into, and which outputs visuals over HDMI.

And not just any visuals: visuals that rocked the dance floor in the 15th millennium. These are, like, cool visuals. Space wars were fought over these visuals. And it can be yours for the low, low price of some amount of your current century's currency. We need money. Like, lots of it. Like, a whooole bunch. We need to optimize our Schwarzschild radii and we need to linearize our neutrino coefficients, like, again. Whose idea was it again to hire that freaking intern?? And have you seen how much even first-order quasi-degenerate neutrino coefficients are going for in this economy?

******

Marketing gibberish aside, this is a passion project by a single developer that started in a dorm room almost 20 years ago. It saw sporadic development until receiving an investment a few years ago to take the project commercial. The original idea was a novel algorithm for creating non-escape-time fractals, an approach that still isn't really known to the general public even after all this time. The algorithm is inherently dynamic and parameterizable, and graphics hardware has improved just a tad over the years, so here we are!
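Since "responds to MIDI, CV, and audio" can sound a bit abstract, here is a tiny, purely illustrative Python sketch (using the mido library) of the general signal-to-parameter idea: incoming control messages get normalized and routed onto named visual parameters. The CC numbers and parameter names below are invented for the example; this is not our device's actual software or API.

```python
# Purely illustrative: map incoming MIDI CC messages to normalized [0, 1]
# parameter values, the way a knob or external MIDI controller might drive
# a visual parameter. The CC_MAP table and parameter names are made up for
# this example; they are not the Recursion Studio's real software or API.
import mido

# Hypothetical mapping from MIDI CC number -> visual parameter name.
CC_MAP = {
    1: "fold_angle",        # mod wheel
    74: "recursion_depth",
    71: "color_rotation",
}

params = {name: 0.0 for name in CC_MAP.values()}

with mido.open_input() as inport:   # default USB MIDI input port
    for msg in inport:
        if msg.type == "control_change" and msg.control in CC_MAP:
            # Scale the 7-bit CC value (0-127) to a normalized parameter.
            params[CC_MAP[msg.control]] = msg.value / 127.0
            print(params)
```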
There are a handful of ways this project is innovative, largely with respect to the hardware, the software, and the device's artistic output.

Video synthesizer hardware has been around for a while, and while it has never been all that mainstream, it has seen a bit of a resurgence over the past few years, mainly in the modular gear community. Software video synthesis, however, is a massive industry; commercial VJ software is big business. As such, the artistic capabilities of software video synthesis systems massively outcompete what's available on the hardware market. Most modern hardware video synthesizers have a bit of a 'nostalgic' vibe, which makes sense, as the majority of them are analog and the standard interop format is NTSC or PAL at standard-definition resolutions like 640x480. In a lot of ways software is outclassing hardware, and we're hoping to have some impact on that.

For starters, our device both outputs and consumes 1080p HDMI, critical for anything wanting a modern feel. It's also based around a digital system running Linux, making it vastly more compatible with the modern technology ecosystem. You can connect to it via Bluetooth, or over Ethernet for updates, preset sharing, and so on. It also has a handful of USB ports, giving compatibility with basically all USB MIDI devices for free and allowing interfacing with other USB devices such as webcams. The digital nature of the platform is absolutely critical for our expansion plans, elaborated upon below. The general idea is that software is trivial to update: once the hardware enabling the software is complete, the software can continue to grow, and this principle was designed into both the hardware and software of the device from the ground up.

The software is innovative on two fronts: the first is the general architecture of the system, the second is the major software *instrument* running on this architecture that is the primary focus of this project. We can't get into the specifics, but the software is a general platform for making parameterized visual instruments, performantly, on embedded hardware. There isn't really anything out there that can do this and satisfy all of our design requirements, so we had to build our own platform from scratch. We hope to open source it somewhere down the road, as we think the creative coding community, and the broader community, could make good use of it.

The major artistic focus of the project is the recursive algorithm that is the primary instrument of the device. This algorithm is innovative as an algorithm, but most importantly it is innovative aesthetically. It is a highly parameterized, real-time, recursive algorithm for creating non-escape-time fractals. This is different from the most common fractal algorithms, such as the Mandelbrot, Julia, and 3D Mandelbox escape-time algorithms, and it is much easier to compute. As far as we are aware it is not used anywhere except in this project, at least in its full generality, although the demoscene does seem to make use of simplified versions of it.

This algorithm enables the artistic content that is the device's major innovation and selling point. It is computationally simple and enables a vast range of customization, exposing an immense space of artistic content that really just doesn't look like anything else. The instrument has more than 150 parameters: some are numerical values, while others change the structure of the computations executing on the hardware, opening up a massive space of content.
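To give a flavor of the escape-time vs. non-escape-time distinction without giving away our actual algorithm (which we aren't publishing here), below is a heavily simplified Python/NumPy sketch contrasting classic escape-time iteration with a demoscene-style kaleidoscopic fold, in the spirit of the simplified variants mentioned above. The fold count, angle, scale, and coloring are arbitrary placeholders chosen for illustration.

```python
# Illustrative contrast only -- this is NOT the Recursion Studio's algorithm.
# It shows the difference between the familiar escape-time approach
# (Mandelbrot) and a simplified demoscene-style kaleidoscopic fold, which
# colors every pixel from the folded coordinates themselves rather than
# from an escape count.
import numpy as np

def mandelbrot_escape_time(c, max_iter=64):
    """Classic escape-time fractal: color each point by how many
    iterations it takes |z| to escape a fixed radius."""
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=np.int32)
    for i in range(max_iter):
        mask = np.abs(z) <= 2.0            # points that haven't escaped yet
        z[mask] = z[mask] ** 2 + c[mask]
        counts[mask] = i
    return counts

def kaleidoscopic_fold(x, y, folds=8, angle=np.pi / 5, scale=1.3):
    """Simplified demoscene-style fold: repeatedly mirror, rotate, and
    scale the coordinates, then color directly from where they land
    (no escape test involved)."""
    for _ in range(folds):
        x, y = np.abs(x), np.abs(y)                      # mirror into one quadrant
        x, y = (x * np.cos(angle) - y * np.sin(angle),
                x * np.sin(angle) + y * np.cos(angle))   # rotate
        x, y = x * scale - 0.5, y * scale - 0.5          # scale and offset
    return np.sin(3.0 * x) * np.cos(3.0 * y)             # arbitrary cheap coloring

# Sample both on pixel grids; each resulting array can be mapped to colors.
ys, xs = np.mgrid[-1.5:1.5:512j, -2.0:1.0:512j]
escape_img = mandelbrot_escape_time(xs + 1j * ys)
fold_img = kaleidoscopic_fold(xs, ys)
```

The knobs in the fold example (the number of folds, the rotation angle, the scale) are roughly the flavor of numerical parameters described above, while swapping out the fold or coloring functions themselves is roughly the flavor of structural parameter that changes what computation actually runs on the hardware.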