Lemonaide is a state-of-the-art AI company that creates MIDI and audio plugins that generate inspirational ideas, such as melodies and chord progressions in any key. They are partnered with BeatStars, an online marketplace for electronic music producers and beat makers, where they sell access to their generative AI MIDI models.
In 2024, they partnered with a second company, Sauceware, to release a new plugin called Spawn with audio visualization and a substantially larger collection of sounds. The original Lemonaide app has a small monthly subscription fee, whereas Spawn comes with a one-time purchase fee.
Whether you’re stuck in a creative rut or looking to experiment with new styles, Lemonaide makes sure you never run out of ideas. They’ve achieved a high-quality, human sound with rolling chord articulation and catchy, singable melodies. The model generates 4- and 8-bar phrases in a single key, appealing to sample-based producers in search of a quick starting point.
Lemonaide’s Fairly Trained AI MIDI models
Lemonaide began with a home-brewed base model called Seeds, with four different moods to choose from. In 2024 they released a handful of fine-tuned AI MIDI models, called the Collab Club, in partnership with Grammy-winning producers and chart-topping artists:
Kato On The Track: Billboard-charting producer with credits for Joyner Lucas, E-40, and Ice Cube.
KXVI: Grammy-nominated R&B/Soul producer with credits for SZA, Future, and DJ Khaled.
DJ Pain 1: Multi-platinum Neo Soul/Hip Hop producer for Nipsey Hussle, 50 Cent, and Ludacris.
Mantra: Pop hitmaker for Rihanna, Dua Lipa, and Bad Bunny.
Each model is designed to reflect the nuances of its genre, giving you access to styles crafted by industry pros. The Collab Club models are royalty-free for selling beats and placements with fewer than 1,000,000 streams. For major placements, Lemonaide provides an easy clearing process to ensure your projects remain hassle-free.
Lemonaide is certified by Fairly Trained, a non-profit initiative certifying companies that use ethically sourced training material in their AI model datasets. This certification aims to protect artists from unauthorized use of their work, addressing concerns about AI-generated content’s origins and its impact on human creativity.
This licensing model incentivizes content creators by allowing them to generate income from their creative work while maintaining clear boundaries for when licensing terms come into play. It’s a way of ensuring creators are compensated if the AI-generated content is commercially successful. To learn more about this topic, check out the MIDI.ORG article on ethical AI MIDI software.
Built-in virtual instruments and the DAW bridge
Lemonaide’s original product includes a handful of built-in virtual instruments including space pads, electric keys, pain piano, billboard piano, and synth strings. You can audition MIDI seeds with any of those instruments before dragging them into your DAW. They also provide a DAW bridge to enable playback with virtual instruments from your personal collection.
Their latest product, Spawn, includes hundreds of curated instrument presets designed to work together seamlessly. Here’s a quick summary of what they offer:
Bass: Deep sub-bass, mid-bass, and plucked basslines for rhythmic foundation.
Keys & Piano: Versatile piano, electric keys, and organ sounds for harmonic richness.
Synth: Synth keys, leads, and pads for modern, dynamic soundscapes.
Strings & Mallet: Lush string layers, percussive mallet sounds, and steel drums for unique textures.
Brass & Woodwinds: Bold brass, airy flutes, and shimmering bells for melodic accents.
Guitar & Pluck: Acoustic and electric guitar tones, along with sharp plucks for rhythmic melodies.
Soundscapes: Atmospheric and ambient layers to create depth and atmosphere in your tracks.
Spawn’s prompt interface includes a variety of sonic qualities and effect presets as well. Choose from descriptive properties like aggressive, airy, ambient, analog, bright, clean, complex, deep, dirty, distorted, dry, evolving, ethnic, filtered, harsh, huge, lush, processed, punch, simple, spacey, sub, underwater, vinyl, and wobble.
Those prompts guide the MIDI generation, but your control over the music doesn’t end there. Spawn includes additional effect layers like reverb, delay, chorus, distortion, and flanger. Granular control over generative music is precisely what’s been missing from other state-of-the-art text-to-music generators like Suno and Udio.
An interview with the Lemonaide Team
What inspires a group of independent musicians and software developers to go all in on an AI MIDI product like this? I wanted to understand their greatest challenges as well as their biggest wins. So we interviewed their co-founders Michael Jacobs and Anirudh Mani along with Senior Research Scientist Julian Lenz to learn more.
Ezra: What inspired you to start an AI MIDI company?
MJ: It actually all started in my career as a rapper. I fell in love with creating music at age 11 (a lot of my musical inspiration was created out of a lot of Trauma I dealt with as a kid). I uploaded several music videos to YouTube which caught pretty solid steam back in the day.
After spending countless hours making music, I also decided to get into technology with the goal of simply helping my family escape financial poverty. I ended up going to college for technology, and spent 5 years at Google learning more about cloud computing and AI.
After learning about the impact and potential of AI, I decided it would be awesome to create a Hip-Hop EP that was co-produced by AI. And from there, the inspiration continued to snowball into realizing it would be awesome to make helpful tools for musicians using the unique inspirational value AI can provide.
Ani: As MJ was playing with Magenta and other tools, and building our initial offering of “Lemonaid”, I was a Research Scientist at Amazon’s Alexa group, working on speech and audio research problems during the day and experimenting with AI MIDI models for music at night as a very serious hobby, primarily to build something interesting for my own music.
When MJ and I crossed paths, it was serendipitous. Personally, I never thought I’d start a company, but I realized that co-founding “Lemonaide” was the best way for me to express my passion and skills for pushing AI research forward when applied to music, something I also went to Grad school for at Carnegie Mellon.
Growing up in a household obsessed with Hindustani Classical music in India, and learning piano and production at a very early stage, I see myself as an artist first, and a researcher second. I believe this instilled and solidified in me the ethical principles that we now practice at Lemonaide every day – always building while keeping the Artist at the center.
Ezra: What have been some of the greatest challenges you’ve faced so far?
MJ: It always starts with the training data. Using pre-trained MIDI models only got us so far; we very quickly realized that in order to build truly meaningful algorithms, we would need to ethically source high-quality MIDI from real human musicians who care about their craft, so that our AI models generate things that are truly useful to the musician.
Outside of the training data, it also has to do with building custom MIDI algorithms that have the ability to learn the differences and patterns within the training data that make the music what it is. These are things like truly capturing velocity, strumming, offset timing – the list goes on. This work is detailed in the paper we published this past year.
Julian: The single biggest challenge I see is understanding exactly how people would like to interact with ML MIDI systems. The ‘old’ system is, “here’s 20 pre-made MIDI files, now go make this into a song”. Deep learning opens up so many new possibilities, and we believe that most of them in the MIDI realm haven’t been explored yet.
From a bird’s-eye view, we see from the rise of LLM chatbots that people love interactive systems that can be personalized to their exact task and creative/professional style. So, what is the MIDI version of that? This challenge is both technical and creative, and I think there is an opportunity to really redefine how people interact with MIDI in the future.
Another more practical challenge is that of data quantity. We are really proud of being Fairly Trained, which means every piece of our training data is legally cleared. But from the ML side, this of course means that we are working with datasets much smaller than a typical modern AI company.
To put it bluntly, I don’t think companies like OpenAI, Suno or Anthropic could make their type of models if they had to account for all of the data. So this puts a really fun challenge on the deep learning side, where we have to use every trick in the bag since we can’t just rely on scale.
Finally, there is an open challenge of getting models that know just how to break the ‘right’ rules, musically speaking. Most MIDI models, from the Magenta days up until more recent state-of-the-art versions, are pretty diatonic and well-behaved. Of course you can under-train them, or push the temperature, so they just get really weird outputs. But musically speaking, there is that beautiful gray zone where just a few rules are broken – the place where musicians like Stravinsky, Frank Zappa and Thelonious Monk thrive. It’s a huge challenge but I think we are on the right path.
Ani: One of the earliest challenges we faced was striking the balance between a truly generalizable MIDI model and a musically interesting one, as we had limited MIDI data. We took an ensemble-of-models approach to provide a rounded experience for our users during inference, and in parallel continued to collect ethically sourced MIDI data directly from some amazing artists; we were able to overcome this hurdle pretty soon after.
At some point in the last year we also realized that there was a need to increase the overall quality of our MIDI output by capturing more expressive details, which are especially important for a genre like hip-hop, where the swing matters a lot.
This led to our research, led by Julian, introducing a new MIDI tokenization technique called PerTok, which captures such granular details while reducing sequence length by up to 59% and vocabulary size by up to 95% for polyphonic, monophonic, and rhythmic tasks.
Our paper (https://arxiv.org/abs/2410.02060) was also published at ISMIR this year, and this research work is integral to the quality of outputs that our users love from our products Seeds, Collab Club and Spawn.
Ezra: What’s the most rewarding part of running a MIDI company?
MJ: One of the coolest things we are so proud of is the Collab Club. Being able to partner with Grammy-nominated, Billboard-charting producers, meet with them on a weekly basis for over a six-month period, collect their data, train algorithms with their feedback, define a monthly revenue-share business model, and then deploy that to consumers who are looking for inspirational tooling. This is by far one of my favorite videos of one of our artists using their own model and highlighting the journey.
Ani: Lemonaide is an AI company and MIDI is our first love. ‘Controllability’ in AI modeling for music is a widely discussed topic and we believe MIDI modeling is going to be a key part of that conversation moving forward.
As MJ mentioned, every day we cross paths with people that we adore and look up to as artists ourselves, and to be able to build something for other artists and help them is the most rewarding feeling ever.
Collab Club is one such example, where we built AI MIDI models with artists in their style, and now they are the ones who get the biggest share of earnings from these models. Lemonaide will continue to grow and evolve, but something that remains a constant for us is safeguarding the interests of the Artist while navigating this new uncertain world.
Community and Support
Lemonaide fosters a thriving community of producers and artists through its Discord channel and blog resources, offering tutorials, insights, and a space for collaboration. Whether you’re troubleshooting or sharing your latest creation, the Lemonaide community is there to support you.
Check out the Lemonaide and Spawn websites to learn more.
I’ve always been excited about music and technology. My piano teacher set up a simple MIDI studio in the 90’s, with a Dell computer, MIDI keyboard, and a KORG Audio Gallery GM Sound Module, which at the time was many times better than the soft synth sounds coming from Windows’ MIDI playback. We used software like TRAX and Midisoft Studio, and the MIDI demo songs were incredible and inspiring. I was amazed at how much potential there was to create music on the computer, with 64 tracks playing together at once. That’s when I first got into MIDI, learning how note on/off, velocity, and controller signals could control the expression of my music.
I carried that passion for music and technology forward, learning about DAWs and software instruments, studying Composition and Technology in Music and Related Arts at Oberlin Conservatory, and then Music for the Screen at Columbia College Chicago. As an adult, I continued to follow those interests in music and tech, working as a software engineer, film composer, conductor, inventor, creative technologist, and artist.
In 2020, my friend Federico Tobon and I created a musical robotic sculpture, Four Muses, using a repurposed Rock Band keyboard with MIDI out to control four electromechanical musical sculptures as a robotic band. That’s when I started to learn MIDI at a lower level: reading the MIDI notes from the keyboard with an Arduino, packaging each one into a message, sending it with an NRF24L01 transmitter, reading it with another NRF24L01 receiver, and then triggering a corresponding solenoid or motor to strike an instrument wirelessly. One of the instruments instead used motors spinning at different frequencies to produce pitch. Using Arduino, I also programmed several modes for the keyboard and LED matrix, such as a live interaction mode, a playback mode, a sequencer mode, and a teaching mode.
I travel a lot, and am often writing music on the road. I had a Yamaha QY70 in the 2000’s which I used to love tracking songs on. But I’ve always wanted a tiny MIDI keyboard for my laptop. Even portable keyboards like the Korg nanoKEY were too big for me to use with a laptop on a plane, and took up too much space in my luggage. I also wanted something super portable that I could run warmups on with my chorus, the Trans Chorus of Los Angeles, before gigs.
I started tinkering with the Seeed Studio Xiao, a tiny, quarter-sized microcontroller that is cheap, extremely powerful, Arduino-compatible, and able to handle HID (Human Interface Device) emulation as well as MIDI over USB. I made a breadboard prototype based on what I learned from Four Muses, adding some simple Arduino logic to support octave functions (simply add or subtract multiples of 12 from the current note), sustain (send a control change), and modulation (another control change). I open-sourced my code here:
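To give a sense of that logic, here is a minimal, illustrative sketch (not the actual MidiCard firmware) using the Arduino MIDIUSB library, with hypothetical pin assignments:

```cpp
// Illustrative breadboard logic: octave shift, sustain, and modulation over USB MIDI.
#include <MIDIUSB.h>

const int octaveUpPin = 2, octaveDownPin = 3, sustainPin = 4;  // hypothetical wiring
int octaveShift = 0;                       // semitones; octave buttons move it by 12
bool upPrev = false, downPrev = false, sustainPrev = false;

void sendCC(byte cc, byte value) {
  midiEventPacket_t evt = {0x0B, 0xB0, cc, value};   // Control Change, channel 1
  MidiUSB.sendMIDI(evt);
  MidiUSB.flush();
}

// Called by the key-scanning code (not shown) whenever a key goes down.
void sendNoteOn(byte note, byte velocity) {
  byte shifted = constrain(note + octaveShift, 0, 127);
  midiEventPacket_t evt = {0x09, 0x90, shifted, velocity};  // Note On, channel 1
  MidiUSB.sendMIDI(evt);
  MidiUSB.flush();
}

void setup() {
  pinMode(octaveUpPin, INPUT_PULLUP);
  pinMode(octaveDownPin, INPUT_PULLUP);
  pinMode(sustainPin, INPUT_PULLUP);
}

void loop() {
  bool up = digitalRead(octaveUpPin) == LOW;
  bool down = digitalRead(octaveDownPin) == LOW;
  bool sus = digitalRead(sustainPin) == LOW;

  if (up && !upPrev && octaveShift < 36) octaveShift += 12;       // octave up
  if (down && !downPrev && octaveShift > -36) octaveShift -= 12;  // octave down
  if (sus != sustainPrev) sendCC(64, sus ? 127 : 0);              // sustain = CC 64
  // Modulation would be one more control sent the same way: sendCC(1, amount);

  upPrev = up; downPrev = down; sustainPrev = sus;
  delay(5);  // crude debounce
}
```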
On a component manufacturer’s website, I spent days sorting through hundreds of buttons and switches by size and actuation force (in newtons) to find the smallest, lightest buttons.
I decided to make a credit-card-sized daughterboard for the Xiao that would have 18 keys on it, plus octave buttons and sustain and modulation functions. Since the buttons weren’t touch-sensitive, I added more buttons for setting global velocity levels (P/MF/FF). I also learned how to multiplex inputs and outputs into rows and columns, giving me 25 inputs from 5 rows and 5 columns using only 10 I/O pins, with diodes to filter out ghost notes, as in the sketch below. I learned how to use EasyEDA (free PCB design software), built my first schematic and PCB design, and ordered my first PCBs.
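Here is what that scan might look like, again as an illustrative sketch with hypothetical pins, not the real firmware: drive one row low at a time, read the columns, and let the per-switch diodes block the sneak paths that would otherwise register ghost notes.

```cpp
// Illustrative 5x5 matrix scan: 25 switches from 10 I/O pins.
const int rowPins[5] = {0, 1, 2, 3, 4};   // driven outputs
const int colPins[5] = {5, 6, 7, 8, 9};   // inputs with pull-ups
bool state[5][5] = {};

void setup() {
  for (int r = 0; r < 5; r++) { pinMode(rowPins[r], OUTPUT); digitalWrite(rowPins[r], HIGH); }
  for (int c = 0; c < 5; c++) pinMode(colPins[c], INPUT_PULLUP);
}

void loop() {
  for (int r = 0; r < 5; r++) {
    digitalWrite(rowPins[r], LOW);            // select one row at a time
    for (int c = 0; c < 5; c++) {
      bool pressed = (digitalRead(colPins[c]) == LOW);
      if (pressed != state[r][c]) {
        state[r][c] = pressed;
        // handleSwitch(r * 5 + c, pressed);  // map switch index to a note or function
      }
    }
    digitalWrite(rowPins[r], HIGH);           // deselect before moving on
  }
}
```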
The first batch I got back was a failure. It turns out the diodes I picked had too big of a forward voltage drop, killing my signal flow. ChatGPT was very useful for this, helping me troubleshoot what I’d done wrong, and helping me understand datasheets better to pick the right type of diodes.
I ordered a second batch, and they worked! I had the manufacturer assemble the boards, and then I manually hand-soldered the Xiao microcontrollers onto them, and programmed them in Arduino. I now had a tiny, USB-C MIDI keyboard that I could take anywhere with me, and since it was class compliant, it would work with phones and tablets too.
I started selling them online, and there’s been a lot of enthusiasm for these little boards. I’ve continued to iterate with them too. The next version I designed, the MidiCard Plus, has 25 keys, which I could do by combining the 3 velocity buttons into one button with a toggle function to toggle between P/MF/FF. It also has larger, sturdier buttons.
I’m working on future versions of the MidiCard as well, with multiple color options, using a bare SAMD21 chip instead of the full Xiao module. This will save me from hand-soldering each board and will give them a slimmer profile. I’m also designing cases for the MidiCard, and am open to other suggestions. (Maybe wireless or MIDI 2.0 features!)
If you’re interested in purchasing a MidiCard, they can be found at:
The ethics of AI music became a heated topic at industry panels in 2024, sparking debates around the notion of “fair use”. AI music tech companies have admitted to training their models on copyright-protected music without a license or consent from the rights holders represented by the RIAA.
Over ten thousand major figures from the industry, including Thom Yorke of Radiohead, signed a shared statement near the end of the year, expressing their belief that “unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”
In September 2024, Billboard broke a story about Michael Smith, a man accused of wire fraud in an alleged $10M streaming scheme. He published large quantities of algorithmically generated music and used bot farms to stream that audio for profit. Billboard’s story stoked concerns that instant song generation will pollute DSPs and siphon revenue away from “real” artists and labels.
By contrast, there has been little to no discussion of AI MIDI generation software or its ethical implications. Instant song generators appeal to a substantially larger market and pose a more direct threat to DSPs, while MIDI tools are generally considered too niche for a non-technical audience.
The ethical advantages of AI MIDI generation
There are several ethical advantages to generating MIDI files instead of raw audio.
First, MIDI’s small file size conserves energy during training, generation, and file storage. A three-minute arrangement stored as MIDI typically weighs well under 100 KB, while the same piece rendered as 16-bit/44.1 kHz stereo audio runs roughly 30 MB. That means it’s not only cheaper to operate, but may have a lower environmental impact.
Second, independent artists are partnering with AI MIDI companies to create fine-tuned models that replicate their style, and selling access to those models as a vector for passive income.
AI audio models are fine-tuned with artists as well, but the major AI song generation companies are scraping audio from the internet or licensing stock music in bulk. They don’t partner with artists on fine-tunes, which means labels and rights holders will make the most money.
In this article, I’ll review a couple of big AI music ethics stories from 2024 and celebrate a few MIDI generation companies that have been working hard to set up fair deals with artists.
RIAA: Narrowing AI music ethics to licensing and copyright
Debates over the ethics of AI music reached a boiling point in June 2024, in a historic lawsuit by the RIAA against Suno and Udio. Both companies scraped copyright-protected music from the internet and used that audio to train their own commercial AI song generators.
Suno and Udio currently grant their users an unlimited commercial license for audio created on their platforms. This means vocalists can create albums without musicians and producers. Media creatives can add music to their productions without sync licensing fees.
Labels are predictably upset by Suno and Udio’s commercial license clause, which they feel competes directly with their own sync libraries and threatens to erode their bottom line.
To be clear, it’s not that the music industry wants to put a stop to generative AI music. On the contrary, they want to train AI models on their own music and create a new revenue source.
UMG struck up a partnership with AI music generator Klay, announced October 2024. If Klay can compete with Suno and Udio, it will likely be regarded as the “ethical alternative” and set a standard for other major labels to follow.
Fairly Trained: A system of accountability for data licensing
The non-profit organization Fairly Trained and its founder Ed Newton-Rex have put a spotlight on AI audio model training and the need for better licensing standards. They offer an affordable certification for audio companies that want to signal compliance with industry expectations.
Watch the discussion below to learn more about Fairly Trained:
AI MIDI companies with Fairly Trained certifications
At least two AI MIDI companies have been certified Fairly Trained:
Lemonaide Music is a state-of-the-art AI MIDI and audio generation plugin. They partner with music producers to fine-tune models on their MIDI stems. When users purchase a model from the app store, artists receive a 40% revenue share. In early November 2024, Lemonaide announced Spawn, built in partnership with Sauceware, bringing advanced sound design and color field visualization to the MIDI generation experience.
Soundful Music is a B2C music generation service that includes MIDI stems as part of their core product. They hire musicians to create sound templates and render variations of that content from a cloud service. Soundful is a web browser application.
Both of these companies have proven that they sourced their training data responsibly.
The environmental cost of AI music generation
I spoke to several machine learning experts who agreed that MIDI training, generation and storage should consume less energy than raw audio generation, by virtue of the file size alone.
There is no public data on energy consumption at top AI audio generation companies. What we do have are reports on the data centers where those operations are held. Journalists like Karen Hao have ramped up coverage of the data centers housing our generative model operations and demonstrated the impact they’re having on vulnerable populations.
Economists have suggested that the US will benefit from domestic energy production, encouraging the construction of small modular nuclear plants and new data centers.
Big tech companies do have sustainability initiatives, but they focus primarily on carbon emission reduction. The depletion of freshwater resources has received less attention from the media, appears to be less tightly regulated, and may end up being the most important issue.
🚫 In May 2024, Microsoft’s environmental sustainability report confirmed that they failed to replenish the water consumed by data center operations. Their AI services led to a 34% increase in water consumption over previous years.
Freshwater restoration efforts recovered about 10% of the 55,500 megaliters consumed. The remaining loss of roughly 50,000 megaliters would be enough to fill 20,000 standard Olympic-size swimming pools.
🚫 Amazon Web Services (AWS) appears to be a major offender, but their water use is mostly private. They’ve made a commitment to become “water positive” by 2030, a distant goal post considering the growing rate of consumption.
According to UNESCO, 50% of the people on our planet suffer from extreme water scarcity for at least one month every year. Do we want our generative audio products contributing to that problem, when there might be a better alternative?
How DataMind reduced the impact of their AI music app
Professor Ben Cantil, founder of DataMind Audio, is the perfect example of a founder who prioritized ethics during model training.
DataMind partners directly with artists to train fine-tuned models on their style. Cantil offers a generous 50% revenue share and credits the artists directly on the company’s website.
Their brick-and-mortar headquarters is powered by solar energy. They previously completed a government-sponsored study that reduced their local GPU energy footprint by 40% over a two-month period. Cantil has made a public commitment to use green GPU centers whenever they outsource model training.
His main product is a tone morphing plugin called The Combobulator. Watch a demo of the plugin below to see how it works:
Exploring AI MIDI software further
We’ve already covered some of the Fairly Trained AI MIDI generation companies. Outside that camp, you can also check out HookTheory’s state-of-the-art AI MIDI generation feature, Aria.
The AI MIDI startup Samplab has also released several free browser tools in 2024, though they specialize in audio to MIDI rather than generative music.
Delphos Music is a B2B AI MIDI modeling service that gives musicians the power to fine-tune MIDI models on their own audio stems. Their service is currently high touch and operated through a web browser, but they do have a DAW plugin in beta.
Staccato is building an AI MIDI browser app that can analyze and expand on MIDI content. I’ve also seen a private demo from the AI text-to-MIDI generation startup Muse that looked very promising.
Bookmark our AI MIDI generator article to follow along; we update the list a few times a year to keep it current.
GeoShred introduces a new paradigm for musical instruments, offering fluid expressiveness through a performance surface featuring the innovative “Almost Magic” pitch rounding. This cutting-edge software combines a unique performance interface with physics-based models of effects and musical instruments, creating a powerful tool for musicians. Originally designed for iOS devices, GeoShred is now available as an AUv3 plug-in for desktop DAWs, expanding its reach and integration into professional music production workflows.
GeoShred Studio, an AUv3 plug-in, runs seamlessly on macOS devices. Paired with GeoShredConnect, musicians can establish a MIDI/MPE connection between their iOS device running GeoShred and GeoShred Studio, enabling them to incorporate GeoShred’s expressive multi-dimensional control into their desktop production setup. This connection allows users to perform and record tracks from their iOS device as MIDI/MPE, which can be further refined and edited in the production process.
iCloud integration ensures that preset edits are synchronized between the iOS and macOS versions of GeoShred. For example, a preset saved on the iOS version of GeoShred automatically syncs with GeoShred Studio, providing a seamless experience across platforms.
Equipped with a built-in guitar physical model and 22 modeled effects, GeoShred Studio offers an impressive array of sonic possibilities. For those looking to expand their musical palette, an additional 33 physically modeled instruments from around the globe are available as in-app purchases (IAPs). These instruments range from guitars and bowed strings to woodwinds, brass, and traditional Indian and Chinese instruments.
GeoShred Studio is designed to be performed expressively using GeoShred’s isomorphic keyboard.
GeoShred Studio is also compatible with MPE controllers, conventional MIDI controllers, and even breath controllers, offering a wide range of performance options. GeoShred Studio is free to download, but core functionality requires the purchase of GeoShred Studio Essentials, which includes distinct instruments separate from those in the iOS/iPadOS app, and iOS/iPadOS purchases do not transfer.
Works with macOS Catalina or later.
GeoShred, unleash your musical potential!
We are offering a 25% discount on all iOS/iPadOS and macOS products in celebration of GeoShred 7, valid until October 10, 2024. See the pricing table at moforte.com/pricing
Before we get into Anthony’s presentation at NAMM 2024, I wanted to give a bit of insight into why what he did had such a personal impact on me. I learned synthesis on an ARP 2600!
I started college at Wesleyan University in 1970, the same year that Alvin Lucier, the well-respected electronic music composer, started teaching there. John Cage had been at Wesleyan only a few years before.
Wesleyan was (and still is) a great, small liberal arts school.
I was studying Jazz with Clifford Thornton, who was in Sun Ra’s Arkestra, and with Sam Rivers, who had played with Miles.
Wesleyan has an amazing world music program, and I was also studying African Drumming with Abraham Kobena Adzenyah, who was both an Associate Professor and simultaneously studying for his GED high school diploma. I would occasionally jam with L. Shankar, the Indian violinist.
John McLaughlin was studying vina at Wesleyan in the fall of 1970 and used the Wesleyan cafeteria to rehearse his new band, The Mahavishnu Orchestra. For several weeks in a row, I would hang out after lunch and listen for free as Billy Cobham, Jerry Goodman, Jan Hammer, Rick Laird, and McLaughlin rehearsed. McLaughlin and L. Shankar would later team up in Shakti.
To say the music scene at Wesleyan at the time was eclectic is an incredible understatement.
Anyway, back to Alvin Lucier. I didn’t know what to expect when I showed up in early September 1970 for that first class in Electronic Music 101, but it was more surprising than anything I could have imagined. Alvin Lucier introduced himself, and it sounded like this: “Ma, ma, ma, ma, My… na, na, na, na, name… is Alvin… La, la, Lucier, and I will… ba, ba, ba, be your… tea, tea, teacher.” At that time, Lucier had a horrific stutter, and just the year before he had written his signature work, “I Am Sitting in a Room”.
The text spoken by Lucier describes the process of the work, concluding with a reference to his own stuttering:
I am sitting in a room different from the one you are in now. I am recording the sound of my speaking voice and I am going to play it back into the room again and again until the resonant frequencies of the room reinforce themselves so that any semblance of my speech, with perhaps the exception of rhythm, is destroyed. What you will hear, then, are the natural resonant frequencies of the room articulated by speech. I regard this activity not so much as a demonstration of a physical fact, but more as a way to smooth out any irregularities my speech might have.
Alvin Lucier
In October of 1970, I went to a performance of I Am Sitting in a Room at the Wesleyan coffee house. Musicologists often fail to mention Lucier’s stutter, but to me it was the essence of the piece. Lucier sat in the middle of the coffee house with a microphone, two tape recorders, and speakers positioned around the small room in quad. He started repeating the text of the piece over and over again, with each consonant causing him to stutter.
It was uncomfortable to listen to and watch. But the repetitive stutter was being fed back into the room and doubled by two tape recorders which were slightly out of sync. This created an amazing cascade of stuttered rhythms.
Then after about 10 minutes, Lucier hit a switch and the sound from the speakers stopped. What happened next was magical. He then said perfectly clearly and without any stutter “I am sitting in a room different from the one you are in now.” He then repeated that single phrase and with each repetition, his stutter started to come back.
Then he kicked in the speakers and the whole process started over again. He repeated that process three times over the course of about 40 minutes. You watched in real time as someone with a serious speech impediment used electronic art to fix it, but it couldn’t last; he would always fall back into the halting, uncomfortable pattern of stuttering.
It was both powerful and heartbreaking and one of the most courageous pieces of art I have ever witnessed.
At the Wesleyan Electronic Music Studio, I learned synthesis on two ARP 2600s and an ARP 2500 sequencer set up in quad. Students in Electronic Music classes could get the keys to the studio, and during my four years at Wesleyan I spent many nights creating sounds until the wee hours of the morning, then tearing them apart and starting over from scratch to make a new patch. It was there, working with the ARP 2600, that I learned the sheer joy of making sounds with synthesizers.
Anthony’s passion for teaching synthesis brought all of that joy back.
The Lifetime Achievement Awards at April NAMM 2023
At the April NAMM show, we gave out MIDI Association Lifetime Achievement Awards to the founding fathers of modern synthesis and music production, including Alan Pearlman from ARP.
So when Dina Pearlman, who runs the ARP Foundation and received the award on her father’s behalf in 2023, came to us at NAMM 2024 and asked for a favor, we couldn’t say no.
She had scheduled a performance by Anthony at the ARP Foundation booth, which was only a 5-by-10 booth against the wall at the front of Hall A. We had a much larger booth and headphones for 50 guests.
So even though we had 23 sessions arranged already, we had to say yes and boy are we glad we did!
If you don’t know who Anthony is, he was one of the main people who brought synthesizers to Hollywood.
He was heavily involved with the Synclavier and its development, and he and his partner, Brian Banks, had notable credits on some of the first films to almost exclusively use synths, including WarGames (1983), Starman (1984), The Color Purple (1985), Stand by Me (1986), Planes, Trains and Automobiles (1987), Young Guns (1988), and Internal Affairs (1990).
Here is a 1979 poster promoting the Synners’ (that’s Anthony and his partner at the time, Brian Banks) performances of classical pieces at the LA County Museum of Natural History.
Anthony has also been the synthesist on many amazing records, including Michael Jackson’s Thriller (produced by Quincy Jones). There is a link at the bottom of the article to his YouTube page, which has a bunch of great videos, including his presentation at NAMM 2024 where he invited young people on stage and taught them how to get cool sounds out of the ARP 2600 in a matter of minutes.
His passion for synthesis brought back college memories of discovering the joys of analog modular synths for the first time guided by Alvin Lucier.
Anthony Marinelli’s Presentation of the ARP 2600 at The MIDI Association Booth, NAMM 2024
ShowMIDI is a multi-platform GUI application to effortlessly visualize MIDI activity, filling a void in the available MIDI monitoring solutions.
Instead of wading through logs of MIDI messages to correlate relevant ones and identify what is happening, ShowMIDI visualizes the current activity and hides what you don’t care about anymore. It provides you with a real-time glanceable view of all MIDI activity on your computer.
When something happens that you need to analyze in detail, you can press the spacebar to pause the stream and inspect a static snapshot. Once you’re done, press the spacebar again and ShowMIDI resumes with the latest activity.
This animation shows the difference between a traditional MIDI monitor on the left and ShowMIDI on the right:
Open-source and multi-platform
ShowMIDI is written in C++ with JUCE for macOS, Windows, and Linux; an iOS version is in the works. You can find the source code in the GitHub repository.
Alongside the standalone application, ShowMIDI is also available as VST2, VST3, AUv2, AUv3, CLAP and LV2 plugins for DAWs and hosts that support MIDI effect plugins. This makes it possible to visualize MIDI activity for individual channels and to save these with your session.
Introduction and overview
Below is an introduction video that shows how the standalone version of ShowMIDI works. You get a glimpse of the impetus for creating this tool and how you can use it with multiple MIDI devices. The comparison between traditional MIDI monitor logs (including my ReceiveMIDI tool) and ShowMIDI’s visualization clearly illustrates how much easier the information becomes to understand and consume.
Smart and getting smarter
ShowMIDI also analyzes the MIDI data and displays compound information, like RPN and NRPN messages that are assembled from multiple CC messages. RPN 6, the MPE Configuration Message, is also detected and adds MPE modes to the channels that are part of an MPE zone.
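As a rough sketch of what that analysis entails (assumed logic, not ShowMIDI’s actual source): an NRPN arrives spread across up to four CC messages – CC 99 and 98 select the parameter, CC 6 and 38 carry the value – so a monitor has to fold them back together before it can display a single event.

```cpp
#include <cstdint>
#include <optional>
#include <utility>

// Folds a stream of CC messages back into complete 14-bit NRPN events.
struct NrpnAssembler {
    std::optional<uint8_t> paramMsb, paramLsb, valueMsb;

    // Feed each incoming CC; returns {parameter, value} once CC 38 completes
    // an NRPN. (Devices that send only CC 6 would need a timeout variant.)
    std::optional<std::pair<uint16_t, uint16_t>> onCC(uint8_t cc, uint8_t v) {
        switch (cc) {
            case 99: paramMsb = v; break;   // NRPN parameter MSB
            case 98: paramLsb = v; break;   // NRPN parameter LSB
            case 6:  valueMsb = v; break;   // data entry MSB
            case 38:                        // data entry LSB: event is complete
                if (paramMsb && paramLsb && valueMsb) {
                    uint16_t param = (uint16_t(*paramMsb) << 7) | *paramLsb;
                    uint16_t value = (uint16_t(*valueMsb) << 7) | v;
                    return std::make_pair(param, value);
                }
                break;
            default: break;                 // unrelated CC: display as-is
        }
        return std::nullopt;
    }
};
```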
This is just the beginning: additional visualizations, smart analysis, and interaction modes will continue to be added. As MIDI 2.0 becomes more widely available, ShowMIDI will be able to switch its display mode to take those messages into account too.
The MIDI Association has enjoyed an ongoing partnership with Microsoft, collaborating to ensure that MIDI software and hardware play nicely with the Windows operating system. All of the major operating systems companies are represented equally in the MIDI Association, and participate in standards development, best practices, and more to help ensure the user experience is great for everyone.
As an AI music generator enthusiast, I’ve taken a keen interest in Microsoft Research (MSR) and their machine learning music branch, where experiments about music understanding and generation have been ongoing.
It’s important to note that this Microsoft Research team is based in Asia and enjoys the freedom to experiment without being bound to the product roadmaps of other divisions of Microsoft. That’s something unique to MSR, and gives them incredible flexibility to try almost anything. This means that their MIDI generation experiments are not necessarily an indication of Microsoft’s intention to compete in that space commercially.
That being said, Microsoft has integrated work from its research team in the past, adding derived features to Office, Windows, and more. So it’s not out of the question that these AI MIDI generation efforts might someday find their way into a Windows application, or they may simply remain a fun and interesting diversion for others to experiment with and learn from.
The Microsoft AI music research team, operating under the name Muzic, started publishing papers in 2020 and has shared over fourteen projects since then. You can find their GitHub repository here.
The majority of Muzic’s machine learning efforts have been based on understanding and generating MIDI music, setting them apart from text-to-music audio generation services like Google’s MusicLM, Meta’s MusicGen, and OpenAI’s Jukebox.
On May 31st, Muzic published a research paper on their first text-to-MIDI application, MuseCoco. Trained on a reported 947,659 Standard MIDI Files (a file format which includes MIDI performance information) across six open-source datasets, the researchers found that it significantly outperformed the music generation capabilities of GPT-4 (source).
It makes sense that MuseCoco would outperform GPT-4, having been trained specifically on musical attributes in a large MIDI training dataset. Details of the GPT-4 prompt techniques were included in Figure 4 of the MuseCoco paper, shown below. The developers requested output in ABC notation, a shorthand form of musical notation for computers.
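For readers who haven’t seen it, a complete ABC tune is just a few lines of plain text, which is part of what makes it attractive as an LLM output format. The fragment below is my own illustration, not one of the prompts or outputs from the paper:

```
X:1
T:Example
M:4/4
L:1/4
K:C
C D E F | G A B c | c B A G | F E D C |
```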
Text to MIDI prompting with GPT-4
I have published my own experiments with GPT-4 music generation, including code snippets that produce MIDI compositions and save the MIDI files locally using Node.js with the MidiWriter library. I also shared some thoughts about AutoGPT music generation, exploring how AI agents might self-correct and expand upon the short duration of GPT-4 MIDI output.
Readers who don’t have experience with programming can still explore MIDI generation with GPT-4 through a browser DAW called WavTool. The application includes a chatbot that understands basic instructions about MIDI and can translate text commands into MIDI data within the DAW. I speak regularly with their founder Sam Watkinson, and we anticipate some big improvements in the coming months.
Unlike WavTool, there is currently no user interface for MuseCoco. As is common with research projects, users clone the repository locally and then use bash commands in the terminal to generate MIDI data. This can be done either on a dedicated Linux install, or on Windows through the Windows Subsystem for Linux (WSL). There are no publicly available videos of the service in action and no repository of MIDI output to review.
You can explore a non-technical summary of the full collection of Muzic research papers to learn more about their efforts to train machine learning models on MIDI data.
Although non-musicians often associate MIDI with .mid files, MIDI is much larger than just the Standard MIDI File format. It was originally designed as a way for two synthesizers from different manufacturers to communicate, with no computer involved. Musicians use MIDI extensively for controlling and synchronizing everything from synthesizers and sequencers to lighting and even drones. It is one of the few standards that has stood the test of time.
Today, there are different toolkits and APIs, USB, Bluetooth, and Networking transports, and the new MIDI 2.0 standard which expands upon what MIDI 1.0 has evolved to do since its introduction in 1983.
MIDI 2.0 updates for Windows in 2023
While conducting research for this article, I discovered the Windows music dev blog where it just so happens that the Chair of the Executive Board of the MIDI Association, Pete Brown, shares ongoing updates about Microsoft’s MIDI and music efforts. He is a Principal Software Engineer in Windows at Microsoft and is also the lead of the MIDI 2.0-focused Windows MIDI Services project.
I reached out to Pete directly and was able to glean the following insights.
Q: I understand Microsoft is working on MIDI updates for Windows. Can you share more information?
A: Thanks. Yes, we’re completely revamping the MIDI stack in Windows to support MIDI 2.0, but also add needed features to MIDI 1.0. It will ship with Windows, but we’ve taken a different approach this time, and it is all open source so other developers can watch the progress, submit pull requests, feature requests, and more. We’ve partnered with AMEI (the Japan equivalent of the MIDI Association) and AmeNote on the USB driver work. Our milestones and major features are all visible on our GitHub repo and the related GitHub project.
Q: What is exciting about MIDI 2.0?
A: There is a lot in MIDI 2.0 including new messages, profiles and properties, better discovery, etc., but let me zero in on one thing: MIDI 2.0 builds on the work many have done to extend MIDI for greater articulation over the past 40 years, extends it, and cleans it up, making it more easily used by applications, and with higher resolution and fidelity. Notes can have individual articulation and absolute pitch, control changes are no longer limited to 128 values (0-127), speed is no longer capped at the 1983 serial rate of 31,250 bps, and we’re no longer working with a stream of bytes, but instead with a packet format (the Universal MIDI Packet or UMP) that translates much better to other transports like network and BLE. It does all this while also making it easy for developers to migrate their MIDI 1.0 code, because the same MIDI 1.0 messages are still supported in the new UMP format.
At NAMM, the MIDI Association showcased a piano with the plugin software running in Logic under macOS. Musicians who came by and tried it out (the first public demonstration of MIDI 2.0, I should add) were amazed by how much finer the articulation was, and how enjoyable it was to play.
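To make the Universal MIDI Packet format Pete describes concrete, here is a sketch of how a note-on packs into UMP words under each protocol. This follows my reading of the published UMP layouts; the function names are mine, not from any official implementation:

```cpp
#include <cstdint>

// MIDI 1.0 channel voice note-on inside UMP: message type 0x2, one 32-bit word.
uint32_t ump_midi1_note_on(uint8_t group, uint8_t ch, uint8_t note, uint8_t vel7) {
    return (0x2u << 28) | (uint32_t(group & 0xF) << 24) |
           (uint32_t(0x90 | (ch & 0xF)) << 16) |
           (uint32_t(note & 0x7F) << 8) | (vel7 & 0x7F);
}

// MIDI 2.0 channel voice note-on: message type 0x4, two 32-bit words, with a
// 16-bit velocity and an optional 16-bit attribute (e.g. articulation or pitch).
struct Ump64 { uint32_t word0, word1; };

Ump64 ump_midi2_note_on(uint8_t group, uint8_t ch, uint8_t note,
                        uint16_t vel16, uint8_t attrType, uint16_t attrData) {
    Ump64 p;
    p.word0 = (0x4u << 28) | (uint32_t(group & 0xF) << 24) |
              (0x9u << 20) | (uint32_t(ch & 0xF) << 16) |
              (uint32_t(note & 0x7F) << 8) | attrType;
    p.word1 = (uint32_t(vel16) << 16) | attrData;
    return p;
}
```

The 16-bit velocity field alone offers 512 times the resolution of MIDI 1.0’s 7-bit velocity, which is what made the articulation on that piano feel so much finer.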
Q: When will this be out for customers?
A: At NAMM 2023, we (Microsoft) had a very early version of the USB MIDI 2.0 driver out on the show floor in the MIDI Association booth, demonstrating connectivity to MIDI 2.0 devices. We have hardware and software developers previewing bits today, with some official developer releases coming later this summer and fall. The first version of Windows MIDI Services for musicians will be out at the end of the year. That release will focus on the basics of MIDI 2.0. We’ll follow on with updates throughout 2024.
Q: What happens to all the MIDI 1.0 devices?
A: Microsoft, Apple, Linux (ALSA Project), and Google are all working together in the MIDI association to ensure that the adoption of MIDI 2.0 is as easy as possible for application and hardware developers, and musicians on our respective operating systems. Part of that is ensuring that MIDI 1.0 devices work seamlessly in this new MIDI 2.0 world.
On Windows, for the first release, class-compliant MIDI 1.0 devices will be visible to users of the new API and seamlessly integrated into that flow. After the first release is out and we’re satisfied with performance and stability, we’ll repoint the WinMM and WinRT MIDI 1.0 APIs (the APIs most apps use today) to the new service so they have access to the MIDI 2.0 devices in a MIDI 1.0 capacity, and also benefit from the multi-client features, virtual transports, and more. They won’t get MIDI 2.0 features like the additional resolution, but they will be up-leveled a bit, without breaking compatibility. When the MIDI Association members defined the MIDI 2.0 specification, we included rules for translating MIDI 2.0 protocol messages to and from MIDI 1.0 protocol messages, to ensure this works cleanly and preserves compatibility.
Over time, we’d expect new application development to use the new APIs to take advantage of all the new features in MIDI 2.0.
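One part of the translation rules mentioned above is default value scaling between 7-bit MIDI 1.0 controllers and the wider MIDI 2.0 fields. The sketch below follows my reading of those rules (up-scaling repeats the source bit pattern so that minimum, center, and maximum all map exactly, while down-scaling keeps the most significant bits) and is illustrative rather than the spec’s reference code:

```cpp
#include <cstdint>

// Up-scale e.g. a 7-bit MIDI 1.0 value to a 16- or 32-bit MIDI 2.0 value.
uint32_t scaleUp(uint32_t srcVal, uint8_t srcBits, uint8_t dstBits) {
    uint8_t scaleBits = dstBits - srcBits;
    uint32_t shifted = srcVal << scaleBits;     // a plain shift handles 0..center
    uint32_t srcCenter = 1u << (srcBits - 1);
    if (srcVal <= srcCenter) return shifted;
    // Above center, repeat the low bits so the maximum maps to all-ones.
    uint8_t repeatBits = srcBits - 1;
    uint32_t repeatValue = srcVal & ((1u << repeatBits) - 1);
    repeatValue = (scaleBits > repeatBits) ? repeatValue << (scaleBits - repeatBits)
                                           : repeatValue >> (repeatBits - scaleBits);
    while (repeatValue != 0) {
        shifted |= repeatValue;
        repeatValue >>= repeatBits;
    }
    return shifted;
}

// Down-scale a MIDI 2.0 value back to fewer bits: keep the most significant bits.
uint32_t scaleDown(uint32_t srcVal, uint8_t srcBits, uint8_t dstBits) {
    return srcVal >> (srcBits - dstBits);
}
```

For example, scaleUp(127, 7, 16) returns 0xFFFF and scaleUp(64, 7, 16) returns 0x8000, so full-scale and center positions survive the round trip.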
We are excited to announce that voting for the MIDI Innovation Awards 2023 is officially open. In the tradition of our past two successful years, we continue to celebrate innovation, creativity, and the fantastic array of talent in our MIDI community. As MIDI marks its 40th birthday this year, we’re thrilled to see how far we’ve come and anticipate the future with MIDI 2.0, which is set to inspire another revolution in music.
This year, you can discover and cast your votes for the most innovative MIDI-based projects across five categories until July 21st. The categories are:
Commercial Hardware Products
Commercial Software Products
Prototypes and non-commercial hardware products
Prototypes and non-commercial software products
Artistic/Visual Project or Installation
The MIDI Innovation Awards 2023, a joint effort by Music Hackspace, The MIDI Association, and NAMM, showcases over 70 innovative entries ranging from MIDI controllers to art installations. The three entries with the most votes will be shortlisted and presented to our stellar jury, who will select each category winner.
We’re proud to announce new partnerships for 2023 with Sound On Sound, the world’s leading music technology magazine, and Music China, who will provide exhibition space to our winners at their Autumn 2023 trade fair in Shanghai. Our winners also receive significant support from The MIDI Association and Music Hackspace for the development of MIDI 2.0 prototypes, coverage in Sound On Sound, and an opportunity to exhibit at the 2023 NAMM Show.
The MIDI Innovation Awards entries will be evaluated by a distinguished jury representing various facets of the music industry. The esteemed judges include Jean-Michel Jarre, Nina Richards, Roger Linn, Michele Darling, Bian Liunian, and Pedro Eustache. They’ll be assessing entries based on innovation, inspiring and novel qualities, interoperability, and practical / commercial viability.
Mark these key dates in your calendar:
July 21st: Voting closes, jury deliberation starts
August 16th: Finalists announced
September 16th: Live show online – winners revealed
October: Finalists are invited to participate in the Sound On Sound SynthFest UK and Music China, including the User Choice Awards competition
Vote for your favorites now, and help us champion the most innovative MIDI designs of 2023!
For more details, visit the MIDI Innovation Awards page.
Together, let’s keep the music playing and the innovations flowing!
This article explains the benefits of MIDI 2.0 to people who use MIDI.
If you are a MIDI developer looking for the technical details of MIDI 2.0, see this article, which has been updated to reflect the major revisions published to the core MIDI 2.0 specifications in June 2023.
The following video explains the basics of MIDI 2.0 in simple language.
MIDI 2.0 Overview
Music is the universal language of human beings, and MIDI is the universal digital language of music.
Back in 1983, musical instrument companies that competed fiercely against one another nonetheless banded together to create a visionary specification—MIDI 1.0, the first universal Musical Instrument Digital Interface.
Nearly four decades on, it’s clear that MIDI was crafted so well that it has remained viable and relevant. Its ability to join computers, music, and the arts has become an essential part of live performance, recording, smartphones, and even stage lighting.
Now, MIDI 2.0 takes the specification even further, while retaining backward compatibility with the MIDI 1.0 gear and software already in use. MIDI 2.0 is the biggest advance in music technology in 4 decades. It offers many new features and improvements over MIDI 1.0, such as higher resolution, bidirectional communication, dynamic configuration, and enhanced expressiveness.
MIDI 2.0 Means Two-way MIDI Conversations
MIDI 1.0 messages went in one direction: from a transmitter to a receiver. MIDI 2.0 is bi-directional and changes MIDI from a monologue to a dialog. With the new MIDI-CI (Capability Inquiry) messages and UMP EndPoint Device Discovery Messages, MIDI 2.0 devices can talk to each other, and auto-configure themselves to work together.
They can also exchange information on functionality, which is key to backward compatibility—MIDI 2.0 gear can find out if a device doesn’t support MIDI 2.0, and then simply communicate using MIDI 1.0.
MIDI 2.0 Specs are mostly for MIDI developers, not MIDI users
If you are a MIDI user trying to read and make sense of many of the new MIDI 2.0 specs, MIDI 2.0 may seem really complicated.
Yes, it actually is more complicated because we have given hardware and software MIDI developers and operating system companies the ability to create bi-directional MIDI communications between devices and products.
MIDI 2.0 is much more like an API (application programming interface: a set of functions and procedures that allows applications to access the features or data of an operating system, application, or other service) than the simple one-directional set of data messages that MIDI 1.0 was.
Just connect your MIDI gear exactly like you always have and then the operating systems, DAWs and MIDI applications take over and try to auto-configure themselves using MIDI 2.0.
If they can’t, then they will work exactly like they do currently with MIDI 1.0.
If they do have mutual MIDI 2.0 features, then these auto-configuration mechanisms will work and set up your MIDI devices for you.
MIDI 2.0 works harder so you don’t have to.
Just Use MIDI
As you can see, the only step that MIDI users really have to think about is Step 7: Use MIDI.
MIDI 2.0 expands MIDI to 256 channels in 16 Groups, so you will start to see applications and products that display Groups, but these are not so different from the 16 ports in USB MIDI 1.0.
We have tried very hard to make it simple for MIDI users, but as any good developer will tell you – making it easy for users often makes more work for developers.
MIDI-CI Profile Configuration
At Music China 2023, there were a number of public presentations of recent MIDI specifications that the MIDI Association has been working on.
Joe Shang from Medeli, who is on the MIDI Association Technical Standards Board, put it very well at the International MIDI Forum at Music China.
He said that with the recent updates published in June 2023, MIDI 2.0 had a strong skeleton, but now we need to put muscles on the bones. He also said that Profiles are the muscles we need to add.
He is right. This will be “The Year Of Profiles” for The MIDI Association.
We have now adopted 7 Profiles. Alongside the Default Control Change Mapping Profile adopted in 2022 (covered below), they are:
MIDI-CI Profile for General MIDI 2 (GM2 Function Block Profile)
MIDI-CI Profile for General MIDI 2 Single Channel (GM2 Melody Channel)
MIDI-CI Profile for Drawbar Organ Single Channel
MIDI-CI Profile for Rotary Speaker Single Channel
MIDI-CI Profile for MPE (Multi Channel)
MIDI-CI Profile for Orchestral Articulation Single Channel
We also have completed the basic design of three more Profiles.
MIDI-CI Profile for Orchestral Articulation Single Channel
MIDI-CI Profile for Piano Single Channel
MIDI-CI Profile for Camera Control Single Channel
At Music China, and at the meeting we held at the same time at Microsoft’s office in Redmond, MIDI Association and AMEI members discussed the UDP network transport specification that we are working on, as well as the need for Profiles for all sorts of effects (chorus, reverb, phaser, distortion, etc.), electronic drums, wind controllers, and DAW control.
The MIDI 2.0 overview defined a Profile as a set of rules for how a MIDI device sends or responds to a specific set of MIDI messages to achieve a specific purpose or suit a specific application.
Advanced MIDI users might be familiar with manually “mapping” all the controllers from one device to another device to make them talk to each other. Most MIDI users are familiar with MIDI Learn.
If two devices agree to use a common Profile, MIDI-CI Profile Configuration can auto-configure the mappings. The two devices learn what their common capabilities are and can then auto-configure themselves to respond correctly to a whole set of MIDI messages.
MIDI gear can now have Profiles that can dynamically configure a device for a particular use case. If a control surface queries a device with a “mixer” Profile, then the controls will map to faders, panpots, and other mixer parameters. But with a “drawbar organ” Profile, that same control surface can map its controls automatically to virtual drawbars and other keyboard parameters—or map to dimmers if the profile is a lighting controller. This saves setup time, improves workflow, and eliminates tedious manual programming.
Actually, General MIDI was an early example of what a Profile can do.
GM defined a set of responses to a set of MIDI messages. But GM came before the advent of the bi-directional communication enabled by MIDI-CI.
So in the MIDI 1.0 world, you sent out a GM On message, but you never knew whether the device on the other side could actually respond to it. There was no dialog to establish a connection and negotiate capabilities.
But bi-directional communication allows for much better negotiation of capabilities (MIDI-CI stands for Capability Inquiry, after all).
One of the important things about Profiles is that they can negotiate a set of features like the number of Channels a Profile wants to use. Some Profiles like the Piano Profile are Single Channel Profiles and get turned on and used on any single channel you want.
Let’s use the MPE Profile as an example. MPE works great, but it has no bi-directional communication for negotiation.
With MIDI 2.0, using a mechanism called the Profile Details Inquiry message, two products can agree that they want to be in MPE mode, agree on the number of channels that both devices can support, the number of dimensions of control that both devices support (Pitch Bend, Channel Pressure, and a third dimension of control), and even whether both devices support high-resolution bipolar controllers. Bi-directional negotiation just makes things work better automatically.
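As a purely conceptual illustration of that negotiation (the types and names below are hypothetical; the real exchange travels as MIDI-CI System Exclusive messages):

```cpp
#include <algorithm>

// Hypothetical capability report a device might expose via Profile Details Inquiry.
struct MpeCapabilities {
    int  memberChannels;      // channels the device can devote to MPE
    int  dimensions;          // pitch bend, channel pressure, third dimension
    bool highResControllers;  // high-resolution bipolar controller support
};

// The working configuration is the intersection of the two devices' reports.
MpeCapabilities negotiate(const MpeCapabilities& a, const MpeCapabilities& b) {
    return { std::min(a.memberChannels, b.memberChannels),
             std::min(a.dimensions, b.dimensions),
             a.highResControllers && b.highResControllers };
}
// One device then sends Set Profile On for the MPE Profile, and both sides
// configure themselves to the agreed channel count and features.
```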
Let’s consider MIDI pianos. Pianos have a lot of characteristics in common and we can control those characteristics by a common set of MIDI messages. MIDI messages used by all pianos include Note On/Off and Sustain Pedal.
But when we brought together all the companies that make different kinds of piano products (digital piano makers like Kawai, Korg, and Roland; companies like Yamaha and Steinway that make MIDI-controlled acoustic pianos; and softsynth companies like Synthogy, maker of Ivory), we realized that each company had different velocity and sustain pedal response curves.
We decided that if we all agreed on a Piano Profile with an industry standard velocity and pedal curve, it would greatly enhance interoperability.
Orchestral Articulation is another great example. There are plenty of great orchestral libraries, but each company uses different MIDI messages to switch articulations. Some companies use notes at the bottom of the keyboard and some use CC messages. So we came up with a way to put the actual articulation messages right into the expanded fields of the MIDI 2.0 Note On message.
The following video has a demonstration of how Profile Configuration works.
The MIDI Association adopted the first Profile in 2022, the Default Control Change Mapping Profile.
Many MIDI devices are very flexible in configuration to allow a wide variety of interaction between devices in various applications. However, when 2 devices are configured differently, there can be a mismatch that reduces interoperability.
This Default Control Change Mapping Profile defines how devices can be set to a default state, aligned with core definitions of MIDI 1.0 and MIDI 2.0. In particular, devices with this Profile enabled have the assignment of Control Change message destinations/functions set to common, default definitions.
Because MIDI 1.0 has only 128 controller numbers, even the most commonly used controllers could be reassigned to other functions.
Turning on this Profile sets commonly used controllers such as Volume (CC7), Pan (CC10), Sustain (CC64), Cutoff (CC74), Attack (CC73), Decay (CC75), Release (CC72), and Reverb Depth (CC91) to their intended assignments.
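As a rough illustration of what enabling this Profile implies, here is a minimal sketch of the default map in code. The CC numbers come from the list above; the dictionary and helper function are purely illustrative, not anything defined by the specification.

```python
# Minimal sketch of the Default Control Change Mapping described above.
# CC numbers are the core MIDI definitions; the structure is illustrative.
DEFAULT_CC_MAP = {
    7:  "Volume",
    10: "Pan",
    64: "Sustain",
    72: "Release Time",
    73: "Attack Time",
    74: "Cutoff",
    75: "Decay Time",
    91: "Reverb Depth",
}

def describe_cc(cc_number: int) -> str:
    """Return the default function of a Control Change number, if any."""
    return DEFAULT_CC_MAP.get(cc_number, "no default assignment")

print(describe_cc(74))  # Cutoff
```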
The video above included a very early prototype of the Drawbar Organ Profile and Rotary Speaker Profile.
Property Exchange is a set of System Exclusive messages that devices can use to discover, get, and set many properties of MIDI devices. The properties that can be exchanged include device configuration settings, a list of patches with names and other metadata, a list of controllers and their destinations, and much more.
Property Exchange allows devices to auto-map controllers, choose programs by name, change state, and provide visual editors to DAWs without any prior knowledge of the device or specially crafted software. This means that devices could work on Windows, Mac, Linux, iOS, and web browsers, and may provide tighter integration with DAWs and hardware controllers.
Property Exchange uses JSON inside of the System Exclusive messages. JSON (JavaScript Object Notation) is a human-readable format for exchanging data sets. The use of JSON expands MIDI with a whole new area of potential capabilities.
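To make that concrete, here is a hedged sketch of the kind of JSON a device might return for a program list. The field names follow the general shape of the published ProgramList Resource but should be treated as illustrative; note also that in a real transaction the JSON is chunked and 7-bit encoded to fit inside System Exclusive, which this sketch glosses over.

```python
import json

# Illustrative reply payload for a ProgramList-style Property Exchange GET.
# Field names are a sketch; consult the published PE Resources for the
# actual schemas.
reply_payload = [
    {"bankPC": [0, 0, 0], "name": "Grand Piano",   "category": ["Keys"]},
    {"bankPC": [0, 0, 1], "name": "Drawbar Organ", "category": ["Organ"]},
]

# The JSON text is what ultimately travels inside the SysEx message
# (after chunking and 7-bit encoding, omitted here).
body = json.dumps(reply_payload).encode("ascii")
print(len(body), "bytes of JSON")
```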
The MIDI Association has completed and published the following Property Exchange Resources.
Property_Exchange_Foundational_Resources
Property_Exchange_Mode_Resources
Property_Exchange_ProgramList_Resource
Property_Exchange_Channel_Resources
Property_Exchange_LocalOn_Resource
Property_Exchange_MaxSysex8Streams_Resource
Property_Exchange_Get_and_Set_Device_State
Property_Exchange_StateList
Property_Exchange_ExternalSync_Resource
Property_Exchange_Controller_Resources
One of the most interesting of these PE specifications is Get and Set Device State, which allows an Initiator to send or receive Device State, or in other words, to capture a snapshot that might be sent back to the Device at a later time.
The primary goal of this application of Property Exchange is to GET the current memory of a MIDI Device. This allows a Digital Audio Workstation (DAW) or other Initiator to store the State of a Responder Device between closing and opening of a project. Before a DAW closes a project, it performs the GET inquiry and the target Device sends a REPLY with all data necessary to restore the current State at a later time. When the DAW reopens a project, the target Device can be restored to its prior State by sending an Inquiry: Set Property Data Message.
Data included in each State is decided by the manufacturer but typically might include the following properties (not an exhaustive list):
Current Program
All Program Parameters
Mode: Single Patch, Multi, etc.
Current Active MIDI Channel(s)
Controller Mappings
Samples and other binary data
Effects
Output Assignments
Essentially, this will allow hardware devices to offer the same degree of recall as soft synths when used with a DAW.
There are a number of MIDI Association companies who are actively working on implementing this MIDI 2.0 Property Exchange Resource.
MIDI-CI Process Inquiry
Version 1.2 of MIDI-CI introduces a new category of MIDI-CI, Process Inquiry, which allows one device to discover the current values of supported MIDI messages in another device, including System Messages, Channel Controller Messages, and Note Data Messages.
Here are some use cases:
Query the current values of parameters which are settable by MIDI Controller messages.
Query to find out which Program is currently active.
Query to find out the current song position of a sequence.
For Those Who Want To Go Deeper
In the previous version of this article, we provided some more technical details. We will retain them here for those who want to know more, but if you are satisfied with knowing what MIDI 2.0 can do for you, you can stop reading here.
MIDI Capability Inquiry (MIDI-CI) and UMP Discovery
To protect backwards compatibility in a MIDI environment with expanded features, devices need to confirm the capabilities of other connected devices. When 2 devices are connected to each other, they confirm each other’s capabilities before using expanded features. If both devices support the same expanded MIDI features, they can agree to use them.
The additional capabilities that MIDI 2.0 brings to devices are enabled by MIDI-CI and by new UMP Device Discovery mechanisms.
New MIDI products that support MIDI-CI and UMP Discovery can be configured by communicating directly with each other. Users won’t have to spend as much time configuring the way products work together.
Both MIDI-CI and UMP Discovery share certain common features:
They separate older MIDI products from newer products with new capabilities and provide a mechanism for two MIDI devices to understand which new capabilities are supported.
They assume and require bidirectional communication. Once a bi-directional connection is established between devices, query and response messages define what capabilities each device has; the devices then negotiate or auto-configure to use the features they have in common.
MIDI DATA FORMATS AND ADDRESSING
MIDI 1.0 BYTE STREAM DATA FORMAT
MIDI 1.0 originally defined a byte stream data format and a dedicated 5 pin DIN cable as the transport. When computers became part of the MIDI environment, various other transports were needed to carry the byte stream, including software connections between applications. What remained common at the heart of MIDI 1.0 was the byte stream data format.
The MIDI 1.0 Data Format defines the byte stream as a Status Byte followed by data bytes. Status Bytes have the high bit set (0x80 to 0xFF); the number of data bytes that follow is determined by the Status.
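A minimal parser makes the rule concrete. Running status and System messages are omitted for brevity; the data byte counts per status follow the MIDI 1.0 spec.

```python
# Data byte counts per Channel Voice status nibble, per the MIDI 1.0 spec.
DATA_BYTES = {0x8: 2, 0x9: 2, 0xA: 2, 0xB: 2, 0xC: 1, 0xD: 1, 0xE: 2}

def parse(stream: bytes):
    """Sketch of MIDI 1.0 byte stream parsing (running status omitted)."""
    i = 0
    while i < len(stream):
        status = stream[i]
        if status & 0x80:                 # Status Bytes have the high bit set
            kind, channel = status >> 4, status & 0x0F
            n = DATA_BYTES.get(kind, 0)   # the Status determines the byte count
            yield kind, channel, list(stream[i + 1 : i + 1 + n])
            i += 1 + n
        else:
            i += 1                        # stray data byte, skip it

# 0x90 = Note On, Channel 1: note 60 (middle C) at velocity 100
print(list(parse(bytes([0x90, 60, 100]))))   # [(9, 0, [60, 100])]
```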
Addressing in MIDI 1.0 DATA FORMAT
The original MIDI 1.0 design had 16 channels. Back then synthesizers were analog synths with limited polyphony (4 to 6 Voices) that were only just starting to be controlled by microprocessors.
In MIDI 1.0 byte stream format, the value of the Status Byte of the message determines whether the message is a System Message or a Channel Voice Message. System Messages are addressed to the whole connection. Channel Voice Messages are addressed to any of 16 Channels.
Addressing in USB MIDI 1.0 DATA FORMAT
In 1999, when the USB MIDI 1.0 specification was adopted, USB added the concept of multiple MIDI ports: you could have 16 ports, each with its own 16 channels, on a single USB connection.
The Universal MIDI Packet (UMP) Format
The Universal MIDI Packet (UMP) Format, introduced as part of MIDI 2.0, uses a packet-based data format instead of a byte stream. Packets can be 32 bits, 64 bits, 96 bits, or 128 bits in size.
This format, based on 32 bit words, is more friendly to modern processors and systems than the byte stream format of MIDI 1.0. It is well suited to transports and processing capabilities that are faster and more powerful than those when MIDI 1.0 was introduced in 1983.
More importantly, UMP can carry both MIDI 1.0 protocol and MIDI 2.0 protocol. It is called a Universal MIDI Packet because it handles both MIDI 1.0 and MIDI 2.0 and is planned to be used for all new transports defined by the MIDI Association including the already updated USB MIDI 2.0 specification and the Network Transport specification that we are currently working on.
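The packet size is determined by the Message Type in the top four bits of the first 32-bit word. Here is a sketch of that lookup; the Message Type values below follow the published UMP format, but the table is illustrative rather than exhaustive.

```python
# 32-bit words per UMP packet, keyed by Message Type (top 4 bits of word 1).
# Values follow the published UMP format; the table is not exhaustive.
UMP_WORDS = {
    0x0: 1,  # Utility                      (32 bits)
    0x1: 1,  # System Real Time / Common    (32 bits)
    0x2: 1,  # MIDI 1.0 Channel Voice       (32 bits)
    0x3: 2,  # Data / SysEx7                (64 bits)
    0x4: 2,  # MIDI 2.0 Channel Voice       (64 bits)
    0x5: 4,  # Data / SysEx8, Mixed Data    (128 bits)
}

def packet_words(first_word: int) -> int:
    """Return the packet length in 32-bit words from a packet's first word."""
    return UMP_WORDS[first_word >> 28]

print(packet_words(0x20903C64))  # MIDI 1.0 Note On inside UMP: 1 word
```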
Addressing in UMP FORMAT
The Universal MIDI Packet introduces an optional Group field for messages. Each Message Type is defined to be addressed with a Group or without a Group field (“Groupless”).
Channels, Groups and Groupless Messages in UMP
These mechanisms expand the addressing space beyond that of MIDI 1.0.
Groupless Messages are addressed to the whole connection. Other messages are addressed to a specific Group, either as a System message for that whole Group or to a specific Channel within that Group.
UMP continues this step by step expansion of MIDI capabilities while maintaining the ability to map back to MIDI products from 1983.
UMP carries 16 Groups of MIDI Messages, each Group containing an independent set of System Messages and 16 MIDI Channels. Therefore, a single connection using the Universal MIDI Packet carries up to 16 sets of System Messages and up to 256 Channels.
Each of the 16 Groups can carry either MIDI 1.0 Protocol or MIDI 2.0 Protocol. Therefore, a single connection can carry both protocols simultaneously. MIDI 1.0 Protocol and MIDI 2.0 Protocol messages cannot be mixed together within 1 Group.
Groups are slightly different from Ports, but for compatibility with legacy 5 PIN DIN, a single 16-channel Group in UMP can easily be mapped back to a 5 PIN DIN Port or to a Port in USB MIDI.
You will soon start to see applications which offer selection for Groups and Channels.
The newest specifications in June 2023 add the concept of Groupless Messages and Function Blocks.
Groupless Messages are used to discover details about a UMP Endpoint and its Function Blocks.
Some Groupless Messages are passed to operating systems and applications which use them to provide you with details of what functions exist in the MIDI products you have.
Now a MIDI Device can declare that Groups 1, 2, 3, and 4 are all used for a single function spanning 64 Channels (for example, a mixer or a sequencer).
All of these decisions had to be made very carefully to ensure that everything would map back and work seamlessly with MIDI 1.0 products from 1983.
UMP Discovery
The UMP Format defines mechanisms for Devices to discover fundamental properties of other Devices to connect, communicate and address messages. Discoverable properties include:
1. Device Identifiers: Name, Manufacturer, Model, Version, and Product Instance Id (i.e., a unique identifier).
2. Data Formats Supported: Version of UMP Format (necessary for expansion in the future), MIDI Protocols, and whether Jitter Reduction Timestamps can be used.
3. Device Topology: including which Groups are currently valid for transmitting and receiving messages and which Groups are available for MIDI-CI transactions.
These properties can be used for Devices to auto-configure through bidirectional transactions, thereby enabling the best connectivity between the Devices. These properties can also provide useful information to users for manual configuration.
UMP handles both MIDI 1.0 and MIDI 2.0 Protocols
A MIDI Protocol is the language of MIDI, or the set of messages that MIDI uses. Architectural concepts and semantics from MIDI 1.0 are the same in the MIDI 2.0 Protocol. Compatibility for translation to/from MIDI 1.0 Protocol is given high priority in the design of MIDI 2.0 Protocol.
In fact, Apple has used MIDI 2.0 as the core data format for Core MIDI, with high-resolution 16-bit velocity and 32-bit controllers, since macOS Monterey was released in 2021. So if you have an Apple computer or iOS device, you probably already have MIDI 2.0 in your operating system. Apple has taken care of the details: when you plug in a MIDI 1.0 device, the operating system translates MIDI 2.0 messages into MIDI 1.0 messages so you can just keep making music.
This seamless integration of MIDI 1.0 and MIDI 2.0 is the goal of the numerous implementations that have been released or are under development. Google has added the MIDI 2.0 protocol to Android 13, and Analog Devices has added it to their A2B network. Open source ALSA implementations for Linux and Microsoft Windows drivers/APIs are expected to be released later this year.
One of our main goals in the MIDI Association is to bring added possibilities to MIDI without breaking anything that already works, while making sure that MIDI 1.0 devices work smoothly in a MIDI 2.0 environment.
The MIDI 1.0 Protocol and the MIDI 2.0 Protocol have many messages in common, many of them identical in both protocols.
The MIDI 2.0 Protocol extends some MIDI 1.0 messages with higher resolution and new features. There are newly defined messages. Some can be used in both protocols and some are exclusive to the MIDI 2.0 Protocol.
New UMP messages allow one device to query what MIDI protocols another device supports and they can mutually agree to use a new protocol.
In some cases (the Apple example above is a good one), an operating system or an API might have additional means for discovering or selecting Protocols and JR Timestamps to fit the needs of a particular MIDI system.
MIDI 2.0 Protocol – Higher Resolution, More Controllers and Better Timing
The MIDI 2.0 Protocol uses the architecture of MIDI 1.0 Protocol to maintain backward compatibility and easy translation while offering expanded features.
Extends the data resolution for all Channel Voice Messages.
Makes some messages easier to use by aggregating combination messages into one atomic message.
Adds new properties for several Channel Voice Messages.
Adds several new Channel Voice Messages to provide increased Per-Note control and musical expression.
Adds new data messages: System Exclusive 8 and Mixed Data Set. The System Exclusive 8 message is very similar to MIDI 1.0 System Exclusive but with an 8-bit data format. The Mixed Data Set Message is used to transfer large data sets, including non-MIDI data.
Keeps all System messages the same as in MIDI 1.0.
Expanded Resolution and Expanded Capabilities
This example of a MIDI 2.0 Protocol Note On message shows the expansions beyond the MIDI 1.0 Protocol equivalent. The MIDI 2.0 Note On has higher-resolution Velocity, and two new fields, Attribute Type and Attribute Data, provide space for additional data such as articulation or tuning details.
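Here is a hedged sketch of how those fields pack into the 64-bit UMP packet, per the published MIDI 2.0 Channel Voice layout; the values are illustrative.

```python
def midi2_note_on(group, channel, note, velocity16, attr_type=0, attr_data=0):
    """Pack a MIDI 2.0 Note On into two 32-bit UMP words (a sketch of the
    published layout: 16-bit velocity plus Attribute Type and Attribute Data)."""
    word0 = ((0x4 << 28) | (group << 24) | (0x9 << 20) | (channel << 16)
             | (note << 8) | attr_type)
    word1 = (velocity16 << 16) | attr_data
    return word0, word1

# Middle C at about three-quarters of the 16-bit velocity range
w0, w1 = midi2_note_on(group=0, channel=0, note=60, velocity16=0xC000)
print(hex(w0), hex(w1))  # 0x40903c00 0xc0000000
```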
Easier to Use: Registered Controllers (RPN) and Assignable Controllers (NRPN)
Creating and editing RPNs and NRPNs with MIDI 1.0 Protocol requires the use of compound messages. These can be confusing or difficult for both developers and users. MIDI 2.0 Protocol replaces RPN and NRPN compound messages with single messages. The new Registered Controllers and Assignable Controllers are much easier to use.
The MIDI 2.0 Protocol replaces RPN and NRPN with 16,384 Registered Controllers and 16,384 Assignable Controllers that are as easy to use as Control Change messages.
Managing so many controllers might be cumbersome. Therefore, Registered Controllers are organized in 128 Banks, each Bank having 128 controllers. Assignable Controllers are also organized in 128 Banks, each Bank having 128 controllers.
Registered Controllers and Assignable Controllers support data values up to 32 bits in resolution.
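A sketch of what one of these atomic messages looks like on the wire, per my reading of the published MIDI 2.0 Channel Voice format (status nibble 0x2 for Registered Controllers; 0x3 would be the Assignable equivalent):

```python
def registered_controller(group, channel, bank, index, value32):
    """Sketch of a MIDI 2.0 Registered Controller message: a single 64-bit
    packet carrying bank (0-127), index (0-127), and a 32-bit data value."""
    word0 = ((0x4 << 28) | (group << 24) | (0x2 << 20) | (channel << 16)
             | (bank << 8) | index)
    return word0, value32  # the second word is the full 32-bit value

# Bank 0, controller 3, set to half scale: one message instead of the
# multi-message RPN sequence MIDI 1.0 requires.
print(tuple(hex(w) for w in registered_controller(0, 0, 0, 3, 0x80000000)))
```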
MIDI 2.0 Program Change Message
The MIDI 2.0 Protocol combines the Program Change and Bank Select mechanisms from MIDI 1.0 into one message. The MIDI 1.0 mechanism for selecting Banks and Programs requires sending three MIDI messages; MIDI 2.0 carries the Bank Select and Program Change in a single new MIDI 2.0 Program Change message. Banks and Programs in MIDI 2.0 translate directly to Banks and Programs in MIDI 1.0.
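A hedged sketch of the combined message, again per my reading of the published format; the option flag and field positions should be treated as illustrative.

```python
def midi2_program_change(group, channel, program, bank_msb=None, bank_lsb=None):
    """Sketch of a MIDI 2.0 Program Change: one 64-bit packet that can carry
    the bank as well, replacing the three-message MIDI 1.0 sequence."""
    bank_valid = bank_msb is not None and bank_lsb is not None
    word0 = ((0x4 << 28) | (group << 24) | (0xC << 20) | (channel << 16)
             | (1 if bank_valid else 0))   # option flag bit 0: bank is valid
    word1 = (program << 24) | ((bank_msb or 0) << 8) | (bank_lsb or 0)
    return word0, word1

# Program 5 in bank 2/0, selected with a single message
print(tuple(hex(w) for w in midi2_program_change(0, 0, 5, bank_msb=2, bank_lsb=0)))
```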
Built for the Future
MIDI 1.0 is not being replaced. Rather it is being extended and is expected to continue, well integrated with the new MIDI 2.0 environment. It is part of the Universal MIDI Packet, the fundamental MIDI data format.
In the meantime, MIDI 1.0 works really well. In fact, MIDI 2.0 is just more MIDI. As new features arrive on new instruments, they will work with existing devices and systems. The same is true for the long list of other additions made to MIDI since 1983. MIDI 2.0 is just part of the evolution of MIDI that has gone on for 36 years. The step by step evolution continues.
Many MIDI devices will not need any of the new features of MIDI 2.0 in order to perform all their functions. Some devices will continue to use the MIDI 1.0 Protocol while using other extensions of MIDI 2.0, such as Profile Configuration, Property Exchange or Process Inquiry.
MIDI 2.0 is the result of a global, decade-long development effort.
Unlike MIDI 1.0, which was initially tied to a specific hardware implementation, the new Universal MIDI Packet format makes it easy to implement MIDI 2.0 on any digital transport. MIDI 2.0 already runs on USB and the Analog Devices A2B bus, and we are working on a network transport spec.
To enable future applications that we can’t envision today, there’s ample space reserved for brand-new MIDI messages.
Further development of the MIDI specification, as well as safeguards to ensure future compatibility and growth, will continue to be managed by the MIDI Manufacturers Association working in close cooperation with the Association of Musical Electronics Industry (AMEI), the Japanese trade association that oversees the MIDI specification in Japan.
MIDI will continue to serve musicians, DJs, producers, educators, artists, and hobbyists—anyone who creates, performs, learns, and shares music and artistic works—in the decades to come.
MIDI 2.0 FAQs
We have been monitoring the comments on a number of websites and wanted to provide some FAQs about MIDI 2.0 as well as videos of some requested MIDI 2.0 features.
Will MIDI 2.0 devices need to use a new connector or cable?
No, MIDI 2.0 is a transport agnostic protocol.
Transport- To transfer or convey from one place to another
Agnostic- designed to be compatible with different devices
Protocol-a set of conventions governing the treatment and especially the formatting of data in an electronic communications system
That’s engineering speak for: MIDI 2.0 is a set of messages, and those messages are not tied to any particular cable or connector.
When MIDI first started it could only run over the classic 5 Pin DIN cable and the definition of that connector and how it was built was described in the MIDI 1.0 spec.
However, the MIDI Manufacturers Association and the Association of Musical Electronics Industry (AMEI) soon defined how to run MIDI over many different cables and connectors.
So for many years, MIDI 1.0 has been a transport agnostic protocol.
MIDI 1.0 messages currently run over 5 PIN Din, serial ports, Tip Ring Sleeve 1/8″ cables, Firewire, Ethernet and USB transports.
Can MIDI 2.0 run over those different MIDI 1.0 transports now?
Yes, MIDI 2.0 products can use the MIDI 1.0 protocol, and can even use 5 Pin DIN, if they support the automated bi-directional communication of MIDI-CI and:
One or more Profiles controllable by MIDI-CI Profile Configuration messages.
Any Property Data exchange by MIDI-CI Property Exchange messages.
Any Process Inquiry exchange by MIDI-CI Process Inquiry messages.
However, to run the Universal MIDI Packet and take advantage of MIDI 2.0 Channel Voice messages with expanded resolution, new specifications need to be written for each transport.
The new Universal MIDI Packet format will be common to all new transports defined by AMEI and The MIDI Association. The Universal MIDI Packet contains both MIDI 1.0 messages and MIDI 2.0 Channel Voice Messages, plus some messages that can be used with both.
The most popular MIDI transport today is USB. The vast majority of MIDI products are connected to computers or hosts via USB.
The USB specification for MIDI 2.0 is the first transport specification completed, but we are working on a UMP Network Transport for Ethernet and wireless connectivity.
Can MIDI 2.0 provide more reliable timing?
Yes. Products that support the new USB MIDI Version 2 UMP format can provide higher speed for better timing characteristics. More data can be sent between devices to greatly lessen the chances of data bottlenecks that might cause delays.
UMP format also provides optional “Jitter Reduction Timestamps”. These can be implemented for both MIDI 1.0 and MIDI 2.0 in UMP format.
With JR Timestamps, we can mark multiple Notes to play with identical timing. In fact, all MIDI messages can be tagged with precise timing information. This also applies to MIDI Clock messages which can gain more accurate timing.
Goals of JR Timestamps:
Capture a performance with accurate timing
Transmit MIDI messages with accurate timing over a system that is subject to jitter
Does not depend on system-wide synchronization, master clock, or explicit clock synchronization between Sender and Receiver.
Note: There are two different sources of error for timing: Jitter (precision) and Latency (sync). The Jitter Reduction Timestamp mechanism only addresses the errors introduced by jitter. The problem of synchronization or time alignment across multiple devices in a system requires a measurement of latency. This is a complex problem and is not addressed by the JR Timestamping mechanism.
We have also added Delta Time Stamps to the MIDI Clip File Specification.
Can MIDI 2.0 provide more resolution?
Yes. MIDI 1.0 Voice Channel messages are usually 7 bit (14 bit is possible but not widely implemented, because there are only 128 CC messages).
With MIDI 2.0 Voice Channel Messages velocity is 16 bit.
The 128 Control Change messages, 16,384 Registered Controllers, 16,384 Assignable Controllers, Poly and Channel Pressure, and Pitch Bend are all 32 bit resolution.
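When MIDI 1.0 and MIDI 2.0 devices are mixed, 7-bit values have to be scaled up to 16 or 32 bits. One common approach, and roughly the idea behind the official translation rules, is bit repetition, which keeps zero at zero and full scale at full scale. A sketch under that assumption (the official rules add center-preserving details for bipolar controllers that this omits):

```python
def upscale(value: int, src_bits: int, dst_bits: int) -> int:
    """Upscale by repeating the source bit pattern: 0 stays 0 and full
    scale stays full scale. (A sketch; the official translation rules use
    a related, center-preserving scheme.)"""
    result, shift = 0, dst_bits - src_bits
    while shift >= 0:
        result |= value << shift
        shift -= src_bits
    if shift > -src_bits:
        result |= value >> -shift
    return result

print(upscale(127, 7, 16))  # 65535: full scale maps to full scale
print(upscale(0, 7, 32))    # 0: silence stays silence
```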
Can MIDI 2.0 make it easier to have microtonal control and different non-western scales?
Yes, MIDI 2.0 Voice Channel Messages allow Per Note precise control of the pitch of every note to better support non-western scales, arbitrary pitches, note retuning, dynamic pitch fluctuations or inflections, or to escape equal temperament when using the western 12 tone scale.
MIDI Association partner AudioCipher Technologies has just published Version 3.0 of their melody and chord progression generator plugin. Type in a word or phrase and AudioCipher will automatically generate MIDI files for any virtual instrument in your DAW. AudioCipher helps you overcome creative block with the first ever text-to-MIDI VST for music producers.
Chord generator plugins have been a hallmark of the MIDI effects landscape for years. Software like Captain Chords, Scaler 2, and ChordJam are some of the most popular in the niche. Catering to composers, these apps tend to feature music theory notation concepts like scale degrees and Roman numerals. They provide simple ways to apply chord inversions and sequencing, and to control the BPM. This lets users modify chord voicings and edit MIDI in the plugin before dragging it to a track.
AudioCipher offers similar controls over key signature, scale selection, chord selection, rhythm control, and chord/rhythm randomization. However, by removing in-app arrangement, users get a simplified interface that’s easier to understand and takes up less visual real estate in the DAW. Continue your songwriting workflow directly in the piano roll to perform the same actions that you would in a VST.
AudioCipher retails at $29.99, rather than the $49 to $99 price points of its competitors. When new versions are released, existing customers receive free software upgrades forever. Three versions have been published in the past two years.
Difficulty With Chord Progressions
Beginner musicians often have a hard time coming up with chord progressions. They lack the skills to experiment quickly on a synth or MIDI keyboard. Programming notes directly into the piano roll is a common workaround, but it’s time consuming, especially if you don’t know any music theory and are starting from scratch.
Intermediate musicians may understand theory and know how to create chords, but struggle with finding a good starting point or developing an original idea.
Common chord progressions are catchy but run the risk of sounding generic. Pounding out random chords without respect for the key signature is a recipe for disaster. Your audience wants to hear that sweet spot between familiarity and novelty.
Most popular music stays in a single key and leverages chord extensions to add color. The science of extending a chord is not too complicated, but it can take time to learn.
Advanced musicians know how to play outside the constraints of a key, using modulation to prepare different chords that delight the listener. But these advanced techniques do require knowledge and an understanding of how to break the rules. It’s also hard to teach old dogs new tricks, so while advanced musicians have a rich vocabulary, they are at risk of falling into the same musical patterns.
These are a few reasons that chord progression generators have become so popular among musicians and songwriters today.
AudioCipher’s Chord Progression Generator
Example of AudioCipher V3 generating chords and melody in Logic Pro X
Overthinking the creative process is a sure way to get frustrated and waste time in the DAW. AudioCipher was designed to disrupt ordinary creative workflows and introduce a new way of thinking about music. The first two versions of AudioCipher generated single-note MIDI patterns from words. Discovering new melodies, counter-melodies and basslines became easier than ever.
Version 3.0 continues the app’s evolution with an option to toggle between melody and chord generator modes. AudioCipher uses your word-to-melody cipher as a constant variable, building a chord upon each of the encrypted notes. Here’s an overview of the current features and how to use them to inspire new music.
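AudioCipher hasn’t published its exact cipher, but the general idea of a word-to-melody mapping is easy to sketch: assign each letter a scale degree and wrap around the scale. Everything below, the mapping and the helper, is hypothetical and only illustrates the concept:

```python
# Hypothetical word-to-melody cipher. This is NOT AudioCipher's actual
# algorithm, just an illustration of mapping letters onto scale degrees.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]  # MIDI notes: C D E F G A B

def word_to_melody(word: str, scale=C_MAJOR):
    return [scale[(ord(ch) - ord("a")) % len(scale)]
            for ch in word.lower() if ch.isalpha()]

print(word_to_melody("cipher"))  # [64, 62, 62, 60, 67, 65]
```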
AudioCipher V3.0 Features
Choose from 9 scales: the 7 traditional modes (Major, Minor, Dorian, Phrygian, Lydian, Mixolydian, and Locrian), harmonic minor, and the twelve-note chromatic scale.
Choose from six chord types: Add2, Add4, Triad, Add6, 7th chords, and 9ths.
Select the random chord feature to cycle through chord types. The root notes will stay the same (based on your cryptogram) but the chord types will change, while sticking to the notes in your chosen scale.
Control your rhythm output: Whole, Half, Quarter, Eighth, Sixteenth, and all triplet subdivisions.
Randomize your rhythm output: Each time you drag your word to a virtual instrument, the rhythm will be randomized with common and triplet subdivisions between half note and 8th note duration.
Combine rhythm and chord randomization together to produce an endless variety of chord progressions based on a single word or phrase of your choice. Change the scale to continue experimenting.
Use playback controls on the standalone app to audition your text before committing. Drag the MIDI to your software instrument to produce unlimited variation and listen back from within your DAW.
The default preset is in C major with a triad chord type. Use the switch at the top of the app to move between melody and chord generator modes.
How to Write Chord Progressions and Melodies with AudioCipher
Get the creative juices flowing with this popular AudioCipher V3 technique. You’ll combine the personal meaning of your words with the power of constrained randomness. Discover new song ideas rapidly and fine-tune the MIDI output in your piano roll to make the song your own.
Choose a root and scale in AudioCipher
Switch to the Chord Generator option
Select “Random” from the chord generator dropdown menu
Turn on “Randomize Rhythm” if you want something bouncy or select a steady rhythm with the slider
Type a word into AudioCipher that has meaning to you (try the name of something you enjoy or desire)
Drag 5-10 MIDI clips to your software instrument track
Choose a chord progression from the batch and try to resist making any edits at first
Next we’ll create a melody to accompany your chord progression.
Keep the same root and scale settings
Switch to Melody Generator mode
Create a new software instrument track, preferably with a lead instrument or a bass
Turn on “Randomize Rhythm” if it was previously turned off
Drag 5-10 MIDI clips onto this new software instrument track
Move the melodies up or down an octave to find the right pitch range to contrast your chords
Select the best melody from the batch
Adjust MIDI in the Piano Roll
Once you’ve found a melody and chord progression that inspires you, proceed to edit the MIDI directly in your piano roll. Quantize your chords and melody in the piano roll, if the triplets feel too syncopated for your taste. You can use sound design to achieve the instrument timbre you’re looking for. Experiment with additional effects like adding strum and arpeggio to your chords to draw even more from your progressions.
With this initial seed concept in place, you can go on to develop the rest of the song using whatever techniques you’d like. Return to AudioCipher to generate new progressions and melodies in the same key signature. Reference the circle of fifths for ideas on how to update your key signature and still sound good. Play the chords and melody on a MIDI keyboard until you have ideas for the next section on your own. Use your DAW to build on your ideas until it becomes a full song.
Technical specs
AudioCipher is a 64-bit application that can be loaded either standalone or as a VST3 / Audio Component in your DAW of choice. Ableton, Logic Pro X, FL Studio, Reaper, Pro Tools, and GarageBand have been tested and confirmed to work. Installers are available for both macOS and Windows 10, with installation tutorials available on the website’s FAQ page.
A grassroots hub for innovative music software
Along with developing VSTs and audio sample packs, AudioCipher maintains an active blog that covers the most innovative trends in music software today. MIDI.org has published AudioCipher’s partnerships with AI music software developers like MuseTree and AI music video generator VKTRS.
AudioCipher’s recent articles dive into the cultural undercurrents of experimental music philosophy. One piece describes sci-fi author Philip K Dick’s concept of “synchronicity music”, exploring the role of musicians within the simulation theory of his VALIS trilogy. Another article outlines the rich backstory of PlantWave, a device that uses electrodes to turn plants into MIDI music.
The blog also advocates for small, experimental software like Delay Lama, Riffusion, and Text To Song, sharing tips on how to access and use each of them. Grassroots promotion of these tools brings awareness to the emerging technology and spurs those developers to continue improving their apps.
The Register posted an article today about Firefox supporting Web MIDI.
MIDI was created by a small group of American and Japanese synthesiser makers. Before it, you could hook synths, drum machines and sequencers together, but only through analogue voltages and pulses. Making, recording and especially touring electronic music was messy, drifty and time-consuming. MIDI made all that plug-and-play, and in particular let $500 personal computers take on many of the roles of $500/day recording studios; you could play each line of a score into a sequencer program, edit it, copy it, loop it, and send it back out with other lines.
Home taping never killed music, but home MIDI democratised it. Big beat, rave, house, IDM, jungle, if you’ve shaken your booty to a big shiny beat any time in the last forty years, MIDI brought the funk.
It’s had a similar impact in every musical genre, including film and gaming music, and contemporary classical. Composers of all of the above depend on digital audio workstations, which marshal multiple tracks of synthesised and sampled music, virtual orchestras all defined by MIDI sequences. If you want humans to sing it or play it on instruments made of wood, brass, string and skins, send the MIDI file to a scoring program and print it out for the wetware API. Or send it out to e-ink displays, MIDI doesn’t care.
By now, it doesn’t much matter what genre you consider, MIDI is the ethernet of musical culture, its bridge into the digital.
The Register post was inspired by this tweet from the BBC Archives.
#OnThisDay 1984: Tomorrow’s World had instruments that sounded exactly like different instruments, thanks to the magic of microprocessors. pic.twitter.com/wbhm14WakD
GLASYS (Gil Assayas) was a winner of the MIDI Association’s 2022 Innovation Awards for artistic installations. He’s a keyboard player, composer, sound designer, and video content creator who currently performs live with Todd Rundgren’s solo band. The internet largely knows GLASYS for his viral MIDI art and chiptune music.
We spoke with Gil to learn more about how he makes music. I’ll share that interview with him below. First, let’s have a quick review of his newly released chiptune album.
MIDI Art that Tugs on my Heartchips
The latest record from GLASYS, Tugging On My Heartchips, debuted in January 2023 and captures the nostalgia of early 8-bit game music perfectly, with classic sound patches that transport the listener back in time. The arrangements are true to the genre, and some of the songs even have easter eggs to find.
Gil created MIDI art to inspire multiple songs on the album, elevating the album’s conceptual value into uncharted meta-musical territory. He even created music video animations of the MIDI notes in post production. On track two, The MIDI Skull Song, you can almost hear the swashbuckling pirates in search of buried treasure. Take a listen here:
The MIDI Gargoyle Song features an even more complex drawing, with chromatic lines to put any pianist’s hands in a pretzel. Once the picture is finished, Gil’s gargoyle comes to life in a funny animation and dances to the finished song. It’s the first time I’ve seen someone create animations from MIDI notes in the piano roll!
Heartchips delivers all the bubbly synths and 8-bit percussion you could want from a chiptune album. But with Gil, there’s more to the music than aesthetic bravado. Where other artists lean on retro sounds to make mid-grade music sound more interesting, GLASYS has mastered the composing and arrangement skills needed to evoke the spirit of early 90s games.
It can take several listens to focus on each of the album’s sonic elements. The mix and panning are impeccable. Gil rolls off some of the harsh overtones in the instrument’s waveform, to make it easier on our ears. But there’s something special happening in the arrangement, which we discussed in more detail during our interview.
Drawing from a classic 8-Bit technique
The playful acoustics of Heartchips mask Gil’s complex harmonic and rhythmic ideas like a coating of sugar.
Gil gives each instrument a clear sense of purpose and identity, bringing them together in a song that tells a story without words. To accomplish this, he uses techniques from early game music, back when composers had only 5 instrument channels to use.
In the 1980s and 90s, as portable gaming consoles became popular, there was a limit to the number of notes a microchip could store and play at once. Chords had to be hocketed, or broken up into separate notes, so that the other instrument channels could be used for lead melody, accompaniment and percussion.
As a result, the classic 8-bit composers avoided sustained chords unless the entire song was focused on that one instrument. Every instrument took on an almost melodic quality.
While Heartchips doesn’t limit itself to five instrument channels per song, it does align with the idea that harmony and chord progressions should be outlined rather than merely sustained as a chord.
When GLASYS outlines a chord as an arpeggio in the bass, you’ll often hear two or three countermelodies in the middle and upper registers. Each expresses a unique idea, completely different from the others, yet somehow working perfectly with them. That’s the magic of his art.
There are a few moments on the album when chords are sustained for a measure at a time, like on the tracks No School Today or Back to Reality. These sustained chords take on an almost dramatic effect because they disrupt your expectations as a listener.
Overall, I found Tugging on my Heartchips to be a fun listening experience with lots of replay value.
What’s up with GLASYS in 2023?
In February 2023, GLASYS branched out from MIDI piano roll drawings to audio spectrograms. This new medium grants him the ability to draw images with more than MIDI blocks.
A spectrogram is a kind of 2D image. It’s a visual map of the sound waves in an audio file. It reads left to right, just like a piano roll. The X axis represents time and the Y axis represents frequency.
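If you want to see this for yourself, a few lines of Python will plot the spectrogram of any WAV file. This sketch assumes numpy, scipy, and matplotlib are installed, and a placeholder input file called voice.wav:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile

# Plot a spectrogram: time on the X axis, frequency on the Y axis,
# brightness showing the energy at each frequency over time.
rate, audio = wavfile.read("voice.wav")   # hypothetical input file
if audio.ndim > 1:
    audio = audio.mean(axis=1)            # mix stereo down to mono
freqs, times, sxx = signal.spectrogram(audio, fs=rate, nperseg=1024)
plt.pcolormesh(times, freqs, 10 * np.log10(sxx + 1e-12))  # dB scale
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```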
Some other artists (Aphex Twin, Dizasterpeace) have hidden images in spectrograms before, but those previous efforts only generated white noise. GLASYS has defied everyone’s expectations with spectrogram art created from his own voice and keyboards.
Here’s one of his latest videos, humming and whistling accompaniment to a piano arrangement in order to create a dragon. It may be the first time in history that something like this has been performed and recorded:
An Interview with GLASYS (Gil Assayas)
I’ve really enjoyed the boundary-defying MIDI art that comes from GLASYS, so I reached out on behalf of the MIDI Association to ask a few questions and learn more. Here’s that conversation.
E: You released an album in January called Tugging On My Heartchips. Can you talk about what inspired you to write these songs and share any challenges that came up while creating it?
G: Sure. The Game Boy was a big part of my childhood. It was the only console we had, because in Israel it was more expensive and harder to get a hold of other systems.
The first games I had were Link’s Awakening, Donkey Kong, Battletoads, and Castlevania. I loved the music and what these composers could achieve with 4 tracks, using pulse wave and noise. Somehow they could still create these gorgeous melodies.
My experience growing up with those games was the main inspiration for this album. I never really explored these sounds in previous albums. I always went more for analog synths.
E: Your first GLASYS EP, The Pressure, came out in 2016 but your Youtube channel goes back almost a decade. Can you tell me a bit about the history of GLASYS?
G: When I first got started, I was playing in a band in Israel and every so often I would write a solo work that didn’t fit the band’s sound. So I created the GLASYS channel to record those ideas occasionally. After moving to the United States, I had a lot more time to focus on my own music and that’s when things started picking up.
E: Can you tell me more about your mixing process? Do you write, record, and mix everything yourself?
G: Yes, I write everything myself and record most of it in my home studio. Nowadays I mix everything myself, though in the past I’ve worked with some great mixing engineers such as Tony Lash (Dandy Warhols, Elliott Smith).
E: I think the playful tone of Heartchips will carry most listeners away. It’s easy to overlook the difficulty of creating a chiptune album like this, not to mention all the video work you do on social media to promote it. You’ve nailed the timbre of your instruments and compositional style.
G: Yeah, mixing chiptune can be trickier than it seems because of all the high end harmonic content. None of the waveforms are filtered and everything has overtones coming through. I found that a little bit of saturation on the square waves, pulse waves, and a little bitcrushing can smooth out the edges a bit. EQ can take out some of the harsh highs, and you can do some sidechaining. These are things you can’t do on a gameboy or NES.
E: How much of your time is spent composing versus mixing and designing your instruments?
G: Mixing takes a lot longer. Composition is the easy part. The hard part is making something cool enough to want to share. I can be a bit of a perfectionist, so I’ll do a mix, try to improve it, and rinse and repeat for ten revisions until I’m happy with it. That’s one of the reasons it can be better to do the mix myself, haha.
E: Before this interview, we were talking about aphantasia where people can’t visualize images but they can still dream in images. Do you ever dream in music?
G: Dreams are such an emotional experience. When you get a musical idea in your dreams, more often than not you forget it when you wake up. But when you do remember it, it’s very surreal. Actually, my first song ever was based on a purple hippo I saw in my dream. I was 5 years old, heard the melody, figured it out and wrote it down with my dad.
E: What inspired you to get into MIDI art?
G: Well, there were a couple of things. Back in 2017, an artist by the name of Savant created some amazing MIDI art – I believe he was the first to do it in a way that sounds musical. He inspired other artists to create MIDI art, such as Andrew Huang who created his famous MIDI Unicorn (which I performed live in one of my videos).
There was another piece in particular that blew me away, this Harry Potter logo MIDI art that uses themes from Harry Potter, masterfully created by composer Hana Shin. I don’t particularly care for Harry Potter, but I just found the concept and execution really inspiring and I thought it would be awesome to perform something like that live. In 2021, Jacob Collier did a few videos where he spelled out words in real time, which proved that it’s possible and motivated me to finally give it a shot.
My idea was to build on the MIDI art concept and draw things that were meaningful to me, such as video game logos and characters – and do it live, so I needed to write them in a way that would be possible to play with two hands. I actually just wanted to do it once or twice, but it was the audience who motivated me to keep going. It got such a huge response, I’ve ended up doing nearly fifty of them. I’m now focusing on other things, but I might get back to MIDI art in the future.
E: Do you have any advice for MIDI composers who struggle coming up with new ideas?
G: Sure, I do get writer’s block sometimes. As far as advice goes… I know how it goes where you keep rewriting something you’ve already created before. Everyone has their subconscious biases, things that they tend to go to without thinking. So even though they’re trying to do something new, they end up repeating themselves. It can be a struggle for sure.
If you find yourself sitting in front of your DAW not knowing what to do, then don’t sit in front of your DAW. Go outside, take a guitar with you, and start jamming. Sometimes a change of environment, breaking the habit, and getting out of the rut of doing the same thing over and over can really help you.
Listen to something entirely different, and new ideas will come. A lot of the problem comes from listening to the same stuff or only listening to one genre of music, so everything you write starts to sound like it.
Listen to music outside of the genres you like. For example, if you never listen to Cuban music, listen to it for a week. Some of it will creep into your subconscious, and you might end up writing some indie rock song with Cuban elements that’s awesome and sounds entirely new.
E: Are there any organizational tricks that you use to manage the sheer volume of musical ideas you come up with?
G: Yeah I used to have a lot of little ideas and save them in different folders, but it was too difficult to get back to things that I had written a year ago. Time goes by, you forget about how you felt when you wrote that thing, you feel detached from it.
If I decide to do something, I work on just one or two tracks until I’m done with them. I don’t record every idea I have either. I have to feel motivated enough to do something with it.
E: Do you have perfect pitch? Can you hear music in your head before playing it?
G: Definitely, yeah I can hear music in my head. I do have perfect pitch but it has declined a little bit as I get older.
E: What can we expect from GLASYS in 2023?
G: Lots of new music and videos – I’ve got many exciting ideas that I’m looking forward to sharing!
To learn more about Gil’s musical background, check out interviews with him here, here, and here. You can also visit the GLASYS website or check out his Youtube channel.
If you enjoyed this artist spotlight and want to read more about innovative musicians, software, and culture in 2023, check out the AudioCipher blog. We’ve recently covered Holly Herndon’s AI music podcast Interdependence, shared a new Japanese AI music plugin called Neutone, and promoted an 80-musician Songcamp project that created over 20,000 music NFTs in just six weeks. AudioCipher is a MIDI plugin that turns words into music within your DAW.
Hans Zimmer is one of the most famous and prolific film composers in the world.
He has composed music for over 150 films, including blockbusters like The Lion King, Gladiator, The Last Samurai, Pirates of the Caribbean, The Dark Knight, Inception, Interstellar and Dunkirk.
In a recent interview with Ben Rogerson from MusicRadar, this is what he said about MIDI.
MIDI is one of the most stable computer protocols ever written.
MIDI saved my life, I come from the days of the Roland MicroComposer, typing numbers, and dealing with Control Voltages. I was really happy when I managed to have eight tracks of sequencer going. From the word go, I thought MIDI was fabulous.
by Hans Zimmer for MusicRadar
To read the whole article, click on the link below
A new generation of AI MIDI software has emerged over the past 5 years. Google, OpenAI, and Spotify have each published a free MIDI application powered by machine learning and artificial intelligence.
The MIDI Association reported on innovations in this space previously. Google’s AI Duet, their Music Transformer, and Massive Technology’s AR Pianist all rely on MIDI to function properly. We’re beginning to see the emergence of browser and plugin applications linked to cloud services, running frameworks like PyTorch and TensorFlow.
In this article we’ll cover three important AI MIDI tools – Google Magenta Studio, OpenAI’s MuseNet, and Spotify’s Basic Pitch MIDI converter.
Google Magenta Studio
Google Magenta is a hub for music and artificial intelligence today. Anyone who uses a DAW and enjoys new plugins should check out the free Magenta Studio suite. It includes five applications. Here’s a quick overview of how they work:
Continue – Continue lets users upload a MIDI file and leverage Magenta’s music transformer to extend the music with new sounds. Keep your temperature setting close to 1.0-1.2, so that your MIDI output sounds similar to the original input but with variations.
Drumify – Drumify creates grooves based on the MIDI file you upload. They recommend uploading a single instrumental melody at a time, to get the best results. For example, upload a bass line and it will try to produce a drum beat that complements it, in MIDI format.
Generate – Maybe the closest tool in the collection to a ‘random note generator’, Generate uses a Variational Autoencoder (MusicVAE) trained on millions of melodies and rhythms.
Groove – This nifty tool takes a MIDI drum track and uses Magenta to modify the rhythm slightly, giving it a more human feel. So if your music was overly quantized or had been performed sloppily, Groove could be a helpful tool.
Interpolate – This app asks you for two separate MIDI melody tracks. When you hit generate, Magenta composes a melody that bridges them together.
The Magenta team is also responsible for Tone Transfer, an application that transforms audio from one instrument to another. It’s not a MIDI tool, but you can use it in your DAW alongside Magenta Studio.
OpenAI MuseNet
MuseTree – Free Nodal AI Music Generator
OpenAI is a major player in the AI MIDI generator space. Their DALL-E 2 web application took the world by storm this year, creating stunningly realistic artwork and photographs in any style. But what you might not know is that they’ve created two major music applications, MuseNet and Jukebox.
MuseNet – MuseNet is comparable to Google’s Continue, taking in MIDI files and generating new ones. But users can constrain the MIDI output to parameters like genre and artist, introducing a new layer of customization to the process.
MuseTree – If you’re going to experiment with MuseNet, I recommend using this open source project MuseTree instead of their demo website. It’s a better interface and you’ll be able to create better AI music workflows at scale.
Jukebox – Published roughly a year after MuseNet, Jukebox focuses on generating audio files based on a set of constraints like genre and artist. The output is strange, to say the least. It does kind of work, but in other ways it doesn’t. The application can also be tricky to operate, requiring a Google Colab account and some patience troubleshooting the code when it doesn’t run as expected.
Spotify is the third major contender in this AI music generator space. They acquired the mobile-friendly music creation app Soundtrap, so they’re no stranger to music production tools. As for machine learning, there’s already a publicly available Spotify AI toolset that powers their recommendation engine.
Basic Pitch is a free browser tool that lets you upload any song as an audio file and convert it into MIDI. Basic Pitch leverages machine learning to analyze the audio and predict how it should be represented in MIDI. Prepare to do some cleanup, especially if there’s more than one instrument in the audio.
Spotify hasn’t published a MIDI generator like MuseNet or Magenta Studio’s Continue. But in some ways Basic Pitch is even more helpful, because it generates MIDI you can use right away, for a practical purpose. Learn your favorite music quickly!
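Basic Pitch is also available as an open source Python package, so you can run the same conversion locally. A minimal sketch, assuming the basic-pitch package and its bundled model (pip install basic-pitch); the input file name is a placeholder:

```python
# Local audio-to-MIDI conversion with Spotify's open source basic-pitch
# package; my_song.wav is a placeholder for any audio file.
from basic_pitch import ICASSP_2022_MODEL_PATH
from basic_pitch.inference import predict

model_output, midi_data, note_events = predict("my_song.wav",
                                               ICASSP_2022_MODEL_PATH)
midi_data.write("my_song.mid")  # midi_data is a PrettyMIDI object
print(f"Transcribed {len(note_events)} notes")
```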
The Future of AI MIDI Generators
The consumer applications we’ve mentioned, like Magenta Studio, MuseTree, and Basic Pitch, will give you a sense of their current capabilities and limitations. For example, Magenta Studio and MuseTree work best when they are fed special types of musical input, like arpeggios or pentatonic blues melodies.
Product demos often focus on the best use cases, but as you push these AI MIDI generators to their limits, the output becomes less coherent. That being said, there’s a clear precedent for future innovation and the race is on, amongst these big tech companies, to compete and innovate in the space.
Private companies, like AIVA and Soundful, are also offering AI music generation for licensing. Their user-friendly interfaces are built for social media content creators who want to license music at a lower cost. Users create an account, choose a genre, generate audio, and then download the original music for their projects.
Large digital content libraries have been acquiring AI music generator startups in recent years. Apple bought a London company called AI Music in February 2022, while Shutterstock purchased Amper Music in 2020. This suggests a large upcoming shift in how licensed music is created and distributed.
At the periphery of these developments, we’re beginning to see robotics teams that have successfully integrated AI music generators into singing, instrument-playing, animatronic AI music robots like Shimon and Kuka. Built by the Center for Music Technology at Georgia Tech, Shimon has performed live with jazz groups and can improvise original solos thanks to the power of artificial intelligence.
Stay tuned for future articles, with updates on this evolving software and robotics ecosystem.
MIDI art is a fun, emerging technique that’s taking the internet by storm. This unusual approach to songwriting centers around creating 2-D art from colored MIDI notes in the piano roll of a Digital Audio Workstation, displayed to the listener for their amusement.
Not all MIDI art sounds good, but it usually expresses a visual concept. The emergence of MIDI art owes its success in large part to video content on YouTube and other social media channels. Live MIDI artist GLASYS even won second prize in the 2022 MIDI Innovation Awards.
To learn more and watch some videos, check out this article on MIDI art at the AudioCipher site.
MIDI Association contributor Walter Werzowa was featured on CNN today (Dec 26, 2021)
One of the best things about the MIDI Association is the great people we get to meet and associate with. After all they don’t call it an association for nothing. This year during May Is MIDI Month, we were putting together a panel on MIDI and music therapy and Executive Board member Kate Stone introduced us to Walter Werzowa.
So we were pleasantly surprised today when one of Walter’s latest projects was featured on Fareed Zakaria’s GPS show.
As I’ll detail on GPS in this week’s Next Big Idea, musicologists, composers & computer scientists have used AI to complete Beethoven’s unfinished 10th symphony.
We first got interested in Walter because of HealthTunes.org. HealthTunes® is an audio streaming service designed to improve one’s physical and mental health, founded by Walter in 2016. It uses binaural beats.
Binaural beats and isochronic tones are embedded within our music (the low humming sound some may hear), which are two different methods used for brain wave entrainment. Binaural beats work by using two slightly different frequency tones sent to each ear. Isochronic tones use a single tone with a consistent beat being turned off and on regularly. Your body automatically reacts to both binaural beats and isochronic tones with a physiological response allowing one’s brain to reach a more desired mental state by influencing brain wave activity.
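The signal processing behind binaural beats is simple enough to sketch: play one frequency in the left ear and a slightly different one in the right, and the brain perceives the difference as a slow beat. A few lines of Python (assuming numpy and scipy) generate a 10 Hz beat:

```python
import numpy as np
from scipy.io import wavfile

# Binaural beat sketch: 200 Hz in the left ear, 210 Hz in the right;
# the listener perceives the 10 Hz difference as a slow pulse.
rate, seconds = 44100, 10
t = np.linspace(0, seconds, rate * seconds, endpoint=False)
left = np.sin(2 * np.pi * 200 * t)
right = np.sin(2 * np.pi * 210 * t)
stereo = np.stack([left, right], axis=1) * 0.3          # keep headroom
wavfile.write("binaural_10hz.wav", rate, (stereo * 32767).astype(np.int16))
```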
We soon learned that Walter had done many things in his career, including memorable sonic branding themes from his company, Musikvergnuegen. Vergnuegen translates as joy or fun, and appears in the German word for amusement park, Vergnügungspark.
Almost everyone on the planet has heard his audio branding signatures. The Intel “Boing” and the T-Mobile five-note theme are both brilliant examples of simple mnemonics that could easily be described as earworms.
By the way, the term earworm comes from the German Ohrwurm, coined over 100 years ago to describe the experience of a song stuck in the brain.
Beethoven’s “finally finalized” 10th Symphony
But Walter’s latest project is perhaps his most impressive yet. He was part of a team of AI researchers and musicians that used AI to “finish” Beethoven’s unfinished Symphony No. 10. How was MIDI involved? Like most AI music projects, the AI algorithm was trained using MIDI data of not only all of Beethoven’s completed symphonies, but all of his other works, as well as works by Beethoven’s contemporaries that he would have listened to and been influenced by. You can watch NBC’s Molly Hunter interview Walter, or just listen to the results of Walter’s work below.
Below is a link to the full Beethoven X symphony performance.
Beethoven’s 10th Symphony streams for free on MagentaMusik 360 on October 9 from 7 PM. The previously unfinished piece by Ludwig van Beethoven has now been completed with the help of AI (artificial intelligence).
There’s a lot of excitement in the air – MIDI 2.0, VR/AR, spatial audio, flying taxis and Facebook’s own flight of fancy, Metaspace (it took me three goes to stop this being called Meatspace; perhaps aptly?).
Back in 1985 there was a similar air of expectation. MIDI had just been ratified by a quorum of MI and Pro Audio companies, and I’d had a personal walk-through of its immediate goals and capabilities from Dave Smith himself, then riding high with Sequential Circuits in Silicon Valley. The initial goals might have been modest: connect two keyboards, play one, and trigger the sound engine in both. But even then ‘multi-timbralism’ was floated, along with the beginnings of how MIDI instruments could be connected to and controlled by a personal computer, a state of affairs that is not materially different almost 40 years later. It was entirely appropriate for Dave to call his first venture into softsynths ‘Seer Systems’.
I’d just written my first Keyfax book and was also working as a keyboardist for John Miles, a supremely talented British pop star who’d had a string of hits in the UK, including the iconic Music, produced by Alan Parsons.
The first edition of Keyfax – the definitive guide to electronic keyboards
New Polydor signing Vitamin Z (‘zee’ for US readers but ‘zed’ for us Brits) wanted Alan to produce their debut album, and Alan approached John to supplement the duo that Vitamin Z comprised: singer Geoff Barradale and bass player Nick Lockwood. Duly, myself and our drummer, Barriemore Barlow of Jethro Tull fame, trooped down to Alan’s luxurious country house studio, The Grange, in the posh village of Benenden in Kent, where Princess Anne had gone to school.
Julian Colbeck and Barriemore Barlow relax during the Vitamin Z sessions
The Grange was equally posh. Alan had a state of the art digital recording system based around the Sony PCM-3324, if memory serves. This was a freestanding system, not computer controlled, nor did it have MIDI. At this time the worlds of ‘audio’ (i.e. regular recording) and the upstart MIDI had nothing whatsoever to do with each other. It would be another four years before the world’s first Digital Audio Workstation was introduced.
Steinberg Pro 24 – One of the first MIDI sequencers
MIDI, far from being as ubiquitous as it is now, was a keyboard player’s thing, for those who had even noticed it at all. I’d just picked up an Atari computer, which had MIDI built in, and had been testing out the Pro 24 ‘sequencer’ from a brand new German outfit called Steinberg. Alan, a geek – then and still now – was fascinated. There still weren’t many MIDI-connectable synths on the market. I’d had my trusty Roland Juno-60 converted to MIDI from Roland’s pre-MIDI DCB (Digital Communication Bus) and brought along a DX7 and, although my memory is a little hazy here, an early Ensoniq Mirage. But the cool thing was that we could record – and correct, change, quantize – parts directly on the Atari. This was just revolutionary and mind-expanding. However, it wasn’t exactly what you’d call stable. Charlie Steinberg had given us his home number, and it is quite possible that he and Manfred Rürup still worked out of their homes back then. For many an evening we’d be on the phone to Charlie, mainly trying to figure out synchronization issues. I remember on one call Charlie pronouncing what we’d certainly been experiencing, and fearing, for a while: “We do time differently,” he said, in his finest Hamburg accent. Ah, well that would certainly explain things.
Julian Colbeck and Alan Parsons chat in 1988’s Getting The Most Out Of Home Recording – the precursor to their Art & Science Of Sound Recording video series and online course.
Things have changed a lot since those 1980s days of big hair and inexplicably even bigger shoulders. Alan continued his amazing career as a producer and performing artist, and we both moved to California.
I founded the company Keyfax NewMedia Inc. and in 1998 released the Phat Boy (yes, it was the ’90s), one of the first hardware MIDI controllers that could be used with a wide variety of synths and software.
Keyfax Phat-Boy MIDI Controller
But Alan and I continued our friendship and partnership and launched Alan Parsons’ Art and Science of Sound Recording. Because although the gear had changed and there were many more tools available to musicians and engineers, the core things that you needed to know to produce music hadn’t really changed at all.
Multi-platinum producer, engineer and artist Alan Parsons recently released his new single “All Our Yesterdays” and announces the launch of his new DVD and HD web video educational series entitled The Art and Science of Sound Recording, or “ASSR,” produced by Keyfax NewMedia Inc. The track was written and recorded during the making of ASSR, an in-depth educational series that highlights techniques in music production while giving a detailed overview of the complete audio recording process. The series is narrated by Billy Bob Thornton and will be available as a complete DVD set in July.
LOS ANGELES, CA (PRWEB), June 23, 2010
Special 50% Off Promo for the MIDI Association on the new ASSR On Line course
The knowledge that Alan has developed over his long and incredible career is available in a number of different formats. There are videos, session files, books & DVDs, live training events, and now the newest incarnation, online courses on Teachable.
Legendary engineer and producer Alan Parsons began his career at Abbey Road, working with The Beatles on Let It Be and Abbey Road. Alan became one of the first ‘name’ engineers thanks to his seminal engineering work on Dark Side Of The Moon – still an audiophile’s delight almost 50 years later.
Alan is an early adopter of technology by nature: Looping, Quadraphonic, Ambisonics, MIDI, digital tape, sampling, DAWs, and Surround 5.1, with which he won the Best Immersive Album GRAMMY in 2019. ASSR-Online is Alan’s Bible of Recording that looks at all aspects of music production, from soundproofing a room, to the equipment including monitors and microphones, to processes including EQ, compression, reverbs, delays and more, to multiple recording situations such as recording vocals, drums, guitars, keyboards, a choir, beatmaking, and of course MIDI. Based on more than 11 hours of custom video, ASSR-Online is a complete course in recording, featuring more than 50 projects, tasks, and assignments with four raw multitracks to help you develop your recording skills to a fully professional level.
Thru November 15 get 50% off Alan Parsons’ ASSR-Online Recording and Music Production course through MIDI.org!
Go to the link below and add the code MIDI50 during checkout.
MIDI Controllers (Products, Physical Controls, and Messages)
Unfortunately, “controller” is probably the most overburdened word in the MIDI lexicon.
It can refer to three different things: products, physical controls, and messages.
MIDI Controller=Product
People can say MIDI Controller and mean a product, like an IK Multimedia iRig Keys I/O 25 controller keyboard.
They might say “I’m using the Roland A-88MKII as my MIDI Controller”.
MIDI Controller=Physical Control
But the word Controller is also used to refer to physical controls like a Modulation Wheel, a Pitch Bend wheel, a Sustain Pedal, or a Breath Controller (yes, there is that word again).
The word Controller is also used to describe the MIDI messages that are sent. So you could say “I’m sending Controller #74 to control Filter Cutoff”.
In fact, there are multiple types of MIDI messages that are sometimes referred to as “Controllers”:
MIDI 1.0 Control Change Messages
Channel Pressure (aftertouch)
Polyphonic Key Pressure (poly pressure)
Pitch Bend
Registered Parameter Numbers (RPNs) in MIDI 1.0, which equate to the 16,384 Registered Controllers in MIDI 2.0
Non-Registered Parameter Numbers (NRPNs) in MIDI 1.0, which equate to the 16,384 Assignable Controllers in MIDI 2.0
MIDI 2.0 Registered Per-Note Controllers
MIDI 2.0 Assignable Per-Note Controllers
To make things a bit more convoluted, the MIDI 1.0 specification contains certain MIDI messages that are specifically named after physical controls:
Decimal Hex Function
1 0x01 Modulation Wheel or Lever
2 0x02 Breath Controller
4 0x04 Foot Controller
11 0x0B Expression Controller
64 0x40 Damper Pedal on/off (Sustain)
66 0x42 Sostenuto On/Off
67 0x43 Soft Pedal On/Off
But these are MIDI Control Change (CC) messages, not the actual physical controllers themselves.
However, most products hardwire the Mod Wheel to CC#1, set the factory default of the Damper Pedal to CC#64, and so on.
Also, on most MIDI products you can set a physical control like the Mod Wheel to send different CC messages (for example, Control Change #2 Breath Controller or Control Change #11 Expression).
The MOD WHEEL is a physical controller that always generates a specific message, CC#1 Modulation Wheel. CC#1 (Control Change) can be applied to almost any function; it does not have a fixed function. It is most often used to apply modulation depth to pitch (vibrato), but that must be assigned to the wheel on a per-program basis.
by Yamaha Product Specialist Phil Clendennin (AKA Bad Mister)
So a MIDI Controller has a MIDI Controller that sends a MIDI Controller! Or, translated into a sentence that makes more sense:
An IK Multimedia iRig Keys I/O 25 has a Mod Wheel that sends Control Change (CC) #11 Expression.
The important thing to remember:
The term MIDI Controller can refer to three different things.
A type of product: the IK Multimedia iRig Keys I/O 25 is a MIDI Controller.
A physical control: the Mod Wheel on the IK Multimedia iRig Keys I/O 25 is a MIDI Controller.
A MIDI Control Change message: the Mod Wheel on the IK Multimedia iRig Keys I/O 25 is sending MIDI Controller #11 Expression.
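To make the third sense concrete, here is a minimal sketch using the Python mido library (an assumption for illustration; any MIDI library would do) that sends the kind of Control Change messages a Mod Wheel puts on the wire:

```python
# Sketch: the "message" sense of MIDI Controller, using the Python mido
# library (an assumption; any MIDI library would do). This is what a Mod
# Wheel actually puts on the wire as you move it.
import mido

out = mido.open_output()          # opens the default MIDI output port

# CC#1 (Modulation Wheel) at half travel, on MIDI channel 1 (mido counts from 0):
out.send(mido.Message('control_change', channel=0, control=1, value=64))

# The same physical wheel, remapped, could just as easily send CC#11 Expression:
out.send(mido.Message('control_change', channel=0, control=11, value=64))
```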
The MPE specification was adopted by The MIDI Association at the 2018 Winter NAMM show.
MPE is designed for MIDI Devices that allow the performer to vary the pitch and timbre of individual notes while playing polyphonically. In many of these MIDI Devices, pitch is expressed by lateral motion on a continuous playing surface, while individual timbre changes are expressed by varying pressure, or moving fingers towards and away from the player.
MPE specifies the MIDI messages used for these three dimensions of control — regardless of how a particular controller physically expresses them — and defines how to configure Devices to send and receive this “multidimensional control data” for maximum interoperability.
MIDI Pitch Bend and Control Change messages are Channel Messages, meaning they affect all notes assigned to that Channel. To apply Channel Messages to individual notes, an MPE controller assigns each note its own Channel.
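Here is a minimal sketch of that channel-per-note trick using the Python mido library (an assumption; MPE itself is language-agnostic). Two notes sound together, but each lives on its own Channel, so each can be bent independently:

```python
# Sketch: MPE's channel-per-note approach, using the Python mido library
# (an assumption for illustration). Two notes sound together, but each
# lives on its own channel, so each can receive its own Pitch Bend.
import mido

out = mido.open_output()

# Member channels 2 and 3 (mido numbers channels from 0):
out.send(mido.Message('note_on', channel=1, note=60, velocity=100))  # note A
out.send(mido.Message('note_on', channel=2, note=64, velocity=100))  # note B

# Bend only note A upward; note B keeps its pitch:
out.send(mido.Message('pitchwheel', channel=1, pitch=2048))

out.send(mido.Message('note_off', channel=1, note=60))
out.send(mido.Message('note_off', channel=2, note=64))
```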
Ableton added MPE support in Live 11, giving Ableton users the ability to be more musically expressive.
What Is MPE?
MPE (MIDI Polyphonic Expression) allows you to control multiple instrument parameters simultaneously depending on how you press the notes on your MPE-capable MIDI controller.
With MPE you can change these individual values for every note in real-time:
Pitch Bend (horizontal movement)
Slide (vertical movement)
Pressure
MPE MIDI messages are displayed once you record or draw a note, and you can edit them at any time.
Keyboards and other controllers are no longer limited to up/down motions and sometimes pressure. The MPE specification accommodates multiple performance gestures within a single note. How hard you strike a key or pad; how much you move your fingers side to side or up and down; how much pressure you apply after striking a key; how quickly or slowly you release from the surface: all of these gestures become musical with MPE. For example, instruments can translate side-to-side motion into vibrato, like on an acoustic string instrument. A tiny amount of pressure on a key can “swell” the volume, or add brightness, to each part of a brass section.
With MPE you don’t just play a note—you play with a note. Because of this it is an artistic breakthrough as well as a technological one. It endows electronic instruments with greater potential for expressiveness.
by Craig Anderton, Author and MIDI Association President
Add more feeling to your music
Edit your recorded MPE MIDI Messages
Select the MIDI clip and click the Note Expression tab in the Clip View Editor.
You can view each parameter by clicking the Show/Hide lane buttons.
Similar to editing automation, you can move breakpoints, copy/paste/delete them, mark them, or use the draw mode.
Morph between chords and add bends by connecting the curve of a note with any subsequent note.
Massive Technologies releases major update to AR Pianist with new MIDI and Audio features
Massive Technologies’ (MT) newest AR Pianist update shows the unique power of combining MIDI data with AI and VR technologies in an incredibly engaging way.
They gave The MIDI Association the inside scoop on their new update to AR Pianist.
One of the major new features is the ability to import MIDI files to create virtual performances.
We’re excited to announce that a major update of AR Pianist is to be released on May 25th. We’ve been working on this update tirelessly for the past two years.
The update brings our AI technology to users’ hands, and gives them the ability to learn any song by letting the app listen to it once through the device microphone.
Using AI, the app will listen to the audio, extract notes being played, and then show you a virtual pianist playing that song for you with step by step instructions.
The app also uses machine learning and augmented reality to project the virtual avatar onto your real piano, letting you experience the performance interactively and from every angle.
Users can also record their piano performance using the microphone (or MIDI), and then watch their performance turn into a 3D / AR virtual concert. Users can share it as a video now, and to VR / AR headsets later this year.
The update also features songs and content by “The Piano Guys”, along with a licensed Yamaha “Neo” designer piano.
by Massive Technologies
A.I. Generates 3D Virtual Concerts from Sound:
“To train the AI, we brought professionally trained pianists to our labs in Helsinki, where they were asked to simply play the piano for hours. The AI observed their playing through special hardware and sensors, and throughout the process the pianist and we would check the AI’s results and give it feedback or corrections. We would then take that feedback and use it as the curriculum for the AI for our next session with the pianist. We repeated that process until the AI results closely matched the human playing technique and style.”
by Massive Technologies
Massive Technologies used MIDI Association Member Google’s TensorFlow to train their AI model.
The technology’s main potential is in music education, letting piano teachers create interactive virtual lessons for remote teaching. It also suits virtual piano concerts, and film or game creators who want to incorporate a super-realistic pianist in their scenes.
The key to it all is MIDI
If you look at the work being done by Google, Yamaha, Massive Technologies, The Piano Guys and others in the AI space, MIDI is central to all of those efforts.
Why? Because MIDI is the Musical Instrument Digital Interface: to connect music with AI and machine learning algorithms, you usually have to convert it into MIDI.
How Does AR Pianist work and what can you do with it?
AR Pianist combines a number of Massive Technologies’ proprietary technologies.
Multi-pitch recognition
Massive Technologies’ in-house ML models can estimate pitch and extract chords from audio streams on the fly, in real time.
This allows you to convert audio files of solo piano recordings into MIDI data that the AI engine can analyze. Of course, you can also import MIDI data directly.
Object pose estimation
Their proprietary models can estimate the 3D position and orientation of real instruments from a single photograph.
This allows you to point your mobile device’s camera at your 88 note keyboard. The app can then map your keyboard into 3D space for use with Augmented Reality.
Motion synthesis and 3D Animation Pipeline
MT developed new machine learning algorithms that can synthesize novel, kinematically accurate 3D musical performances from raw audio files, for use in education and AR/VR. Their tools can perform advanced full-body and hand inverse kinematics to fit the same 3D musical performance to different avatars.
This is the part that almost seems like magic.
The app can take a MIDI or audio performance (the audio should be solo piano), analyze it, and generate musically correct avatar performances with the correct fingerings and hand positions, including complex hand crossovers like those often used in classical or pop music (think of the piano part from Bohemian Rhapsody).
Music notation rendering, in 3D
Massive Technologies has built a notation rendering engine that can be used to display music scores in 3D and inside virtual environments, including AR/VR.
This allows you to see the notation for the performances. Because the data is essentially MIDI-like, you can slow the tempo down, set the app to wait for you to play the right note before moving forward, and use other practice techniques that are widely used in MIDI applications.
A.I. Plays Rachmaninoff Playing Himself (First Person View):
A piano roll recording from 1919 of Rachmaninoff himself playing his famous Prelude, reconstructed into 3D animation by Massive Technologies’ AI.
A virtual camera was attached to the virtual avatar’s head, where its movement is being driven by the AI, simulating eye gaze and anticipation.
Massive Technologies is Fayez Salka (medical doctor, musician, software developer, and 3D artist) and Anas Wattar (BCom graduate of McGill University, software developer, and 3D artist).
AR Pianist is available on the Apple App Store and the Google Play store.
The app is free to download and offers in-app purchases for libraries of songs. You can check out Jon Schmidt of The Piano Guys virtually demoing AR Pianist at any Apple retail store.
A DAW’s MIDI Plug-Ins Can Provide Solutions to Common Problems
In a world obsessed with audio plug-ins, MIDI plug-ins may not seem sexy—but with MIDI’s continued vitality, they remain very useful problem solvers. For an introduction to MIDI plug-ins, please check out the article Why MIDI Effects Are Totally Cool: The Basics.
Although the processing of MIDI data has existed since at least the heyday of the Commodore 64, the modern MIDI plug-in debuted when Cakewalk introduced the MFX open specification for Windows MIDI plug-ins. Steinberg introduced a wrapper for MFX plug-ins, and also developed a cross-platform VST format. MIDI plug-ins run the gamut from helpful utilities that supplement a program like MOTU Digital Performer, to beat-twisting effects for Ableton Live. After Apple’s Logic Pro X added Audio Units-based MIDI plug-ins, interest continued to grow. Typically, MIDI plug-ins insert into MIDI tracks similarly to how audio plug-ins insert into audio tracks (Fig. 1).
Figure 1: In Cakewalk by BandLab, you can drag MIDI plug-ins from the browser into a MIDI track’s effects inserts.
Unfortunately, most companies lock MIDI plug-ins to their own programs. This article therefore takes a general approach, describing typical problems you can solve with MIDI plug-ins; note that not all programs have plug-ins that provide these functions, nor do all hosts support MIDI plug-ins.
Instant Quantization for Faster Songwriting
MIDI plug-ins are generally real-time and non-destructive (some can work offline as well). If you’re writing a song and craft a great drum groove that suffers from shaky timing, don’t dig into the quantization menu and start editing—insert a MIDI quantizing plug-in, set it for eighth or 16th notes, and keep grooving. You can always do the “real” edits later.
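For a sense of what such a plug-in does under the hood, here is a minimal sketch in Python (illustrative helper names only, not any DAW’s actual code):

```python
# Sketch: what a MIDI quantize effect does internally (illustrative only,
# not any DAW's actual code). Times are in ticks, with 480 ticks per
# quarter note, so a 16th-note grid line falls every 120 ticks.
def quantize(note_starts, grid=120):
    """Snap each note start time to the nearest grid line."""
    return [round(start / grid) * grid for start in note_starts]

shaky_drum_hits = [5, 115, 250, 361, 477]     # slightly off-grid 16ths
print(quantize(shaky_drum_hits))              # [0, 120, 240, 360, 480]
```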
Create Harmonies, Map Drums, and Do Arpeggiations
If your host has a Transpose MIDI plug-in, it might do a lot more than audio transposition plug-ins—like transpose by intervals or diatonically, change scales in the process of transposing from one key to another, or create custom transposition maps that can map notes to drums. The image above shows a variety of MIDI plug-ins; clockwise from upper left: the Digital Performer arpeggiator, Live arpeggiator, Cubase microtuner, Live randomizer, Cubase step sequencer, Live scale constrainer, Digital Performer Transposer, and Cubase MIDI Echo.
Filter Data
You’re driving two instruments from a MIDI controller, and want one to respond to sustain but not the other…or filter out pitch bend before it gets to one of the instruments. Data filtering plug-ins can implement these applications, but many can also create splits and layers. If the plug-in can save presets, you can instantly call up oft-used functions (like remove aftertouch data).
Re-Map Controllers
Feed your footpedal through a re-mapping plug-in to control breath control parameters, mod wheel, volume, aftertouch, and the like. There may also be an option to thin or randomize control data, or map data to a custom curve.
Process MIDI Data Dynamically
You can compress, expand, and limit MIDI data (to low, high, or both values). For example, a plug-in could specify that all velocities under a certain threshold adopt the threshold value, or compress velocity dynamics by a ratio, like 2:1. While you don’t need a MIDI plug-in to do these functions (you can usually scale velocities, then add or subtract a constant using traditional MIDI processing functions), a plug-in is more convenient.
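Here is a minimal sketch of that 2:1 velocity compression in Python (illustrative only; real plug-ins expose this as threshold and ratio controls):

```python
# Sketch: 2:1 velocity compression above a threshold (illustrative only;
# real MIDI plug-ins expose this as threshold/ratio knobs).
def compress_velocity(velocity, threshold=96, ratio=2.0):
    """Halve how far a velocity exceeds the threshold, clamped to 0-127."""
    if velocity <= threshold:
        return velocity
    return min(127, round(threshold + (velocity - threshold) / ratio))

print([compress_velocity(v) for v in (64, 100, 120, 127)])  # [64, 98, 108, 112]
```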
MIDI Arpeggiation Plug-Ins
Although arpeggiation isn’t as front and center in today’s music as it was when Duran Duran was tearing up the charts, it’s still valid for background fills and ear candy. With MIDI plug-in arpeggiator options like multiple octaves, different patterns, and rhythmic sync, arpeggiation is well worth re-visiting if you haven’t done so lately. Arpeggiators can also produce interesting patterns when fed into percussion tracks.
“Humanize” MIDI Parts so They Sound Less Metronomic
“Humanizer” plug-ins usually randomize parameters, like start times and/or velocities, so the MIDI timing isn’t quite so rigid. Personally, I think they’re more accurately called “how many drinks did the player have” plug-ins, because musicians tend not to create totally random changes. But taking a cue from that, consider teaming humanization with an event filter. For example, if you have a string of 16th-note hi-hat triggers, use an event filter to increase velocities that fall on the first note of a beat, and perhaps add a slight increase to the third 16th note in each series of four. Then if you humanize velocity slightly, you’ll have a part that combines conscious change with an overlay of randomness.
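A rough Python sketch of that accent-then-humanize recipe (illustrative only; the velocity offsets are arbitrary starting points):

```python
# Sketch of the accent-then-humanize recipe above (illustrative only).
# Each hit is (position_in_16ths, velocity) for one bar of 16th-note hats.
import random

def accent_and_humanize(hits, jitter=6):
    out = []
    for pos, vel in hits:
        if pos % 4 == 0:            # first 16th of each beat: strong accent
            vel += 18
        elif pos % 4 == 2:          # third 16th in each group of four
            vel += 8
        vel += random.randint(-jitter, jitter)   # the "humanize" overlay
        out.append((pos, max(1, min(127, vel))))
    return out

flat_hats = [(i, 80) for i in range(16)]   # a bar of identical hi-hats
print(accent_and_humanize(flat_hats))
```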
Go Beyond Traditional Echo
Compared to audio echo, MIDI echo can be far more flexible. Fig. 2 shows, among other MIDI plug-ins, Cakewalk’s MIDI Echo plug-in.
Figure 2: Clockwise from upper left, Logic Pro X Randomizer and Chord Trigger, Cakewalk Data Filter, Echo, and Velocity processor.
Much depends on a plug-in’s individual capabilities, but many allow variations on the echoes—change pitch as notes echo, do transposition, add swing (try that with your audio plug-in equivalent), and more. But if those options aren’t present, there’s still DIY potential because you can render the track with a MIDI plug-in, then tweak the echoes manually. MIDI echo makes it particularly easy to generate staccato, “dugga-dugga-dugga” synth parts that provide rhythmic underpinnings to many dance tracks; the only downside is that long, languid echoes with lots of repeats eat up synth voices.
Experiment with Adding Human “Feel”
A Shift MIDI plug-in shifts note start times forward or backward. This benefits greatly from MIDI plug-ins’ real-time operation, because you can listen to the changes in “feel” as you move, for example, a snare hit slightly ahead of or behind the beat.
Remove Glitches
“De-glitcher” plug-ins remove duplicate events that hit on the same beat, filter out notes below a specific duration or velocity, “de-flam” notes to move the start times of multiple out-of-sync notes to the average start time, or other options that help clean up pollution from MIDI data streams.
Constrain Notes to a Scale, and Nuke Wrong Notes
Plug-ins that can snap to scale pull errant notes into a defined scale—just bash away at a keyboard (or have a cat walk across it), and there won’t be any “wrong” notes. Placing this after a randomizer can be very interesting, as it offers the benefits of randomness yet notes are always constrained to particular scales.
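A minimal sketch of scale snapping in Python (assuming C major for illustration; real plug-ins let you choose the scale):

```python
# Sketch: snapping errant notes to C major (illustrative only; a real
# plug-in lets you pick the scale). Each note moves to the nearest
# pitch whose pitch class belongs to the scale.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}      # pitch classes C D E F G A B

def snap_to_scale(note, scale=C_MAJOR):
    for offset in range(12):           # search outward from the note
        for candidate in (note - offset, note + offset):
            if 0 <= candidate <= 127 and candidate % 12 in scale:
                return candidate
    return note

print([snap_to_scale(n) for n in (60, 61, 66, 70)])   # [60, 60, 65, 69]
```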
Analyze Chords
Put this plug-in on a track, and it will read out the kind of chord made by the track’s notes. With ambiguous chords, the analyzer may display all voicings it recognizes. Aside from figuring out exactly what you played when you had a spurt of inspiration, for those using MIDI backing tracks an analyzer simplifies figuring out chord progressions.
Add an LFO to Just About Anything
Being able to change MIDI parameters rhythmically can add considerable interest and animation to synth modules and MIDI-controllable signal processors. Although some DAWs let you draw in periodic waveforms (and you can always take the time to create a library of MIDI continuous controller signals suitable for pasting into programs), a Continuous Controller generator provides these same functions in a much more convenient package.
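As a sketch of what such a continuous controller generator produces, here is a triangle-wave LFO rendered as a list of CC events in Python (illustrative only; a real plug-in would emit these in real time, synced to the host tempo, and the CC#74 assignment is just a common convention):

```python
# Sketch: a triangle-wave LFO rendered as a stream of CC events
# (illustrative only; a real plug-in emits these in real time, synced
# to the host tempo). CC#74 is often mapped to filter cutoff, but that
# assignment is up to the receiving instrument.
def cc_lfo_triangle(cc=74, steps=8, cycles=2, ticks_per_step=60):
    """Return (tick, cc_number, value) events tracing 0-127-0 per cycle."""
    events = []
    for i in range(steps * cycles):
        phase = (i % steps) / steps               # 0.0 .. just under 1.0
        tri = 2 * phase if phase < 0.5 else 2 * (1 - phase)
        events.append((i * ticks_per_step, cc, round(tri * 127)))
    return events

for tick, cc, value in cc_lfo_triangle():
    print(tick, cc, value)
```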
The above functions are fairly common—but scratch beneath the surface, and you’ll find all kinds of interesting MIDI plug-ins, either bundled with hosts or available from third parties. Midiplugins.com lists MIDI plug-ins from various companies. Some of the links have disappeared into internet oblivion and some belong to zombie sites, but there are still plenty of potentially useful MIDI effects. More resources are available at midi-plugins.de (the most current of the sites) and tencrazy.com. Happy data diving!
There’s more to life than audio echo – like MIDI echo
Although the concept of MIDI echo has been around for years, early virtual instruments often didn’t have enough voices to play back new echoes without stealing voices from previous echoes. With today’s powerful computers and instruments, this is less of a problem – so let’s re-visit MIDI echo.
Copy and Drag MIDI Tracks
It’s simple to create MIDI echo: Copy your MIDI track, and then drag the notes for the desired amount of delay compared to the original track. Repeat for as many echoes as you want, then bounce all the parts together (or not, if you think you’ll want to edit the parts further). In the screen shot above, the notes colored red are the original MIDI part, the blue notes are delayed by an eighth note, and the green notes are delayed by a dotted-eighth note. The associated note velocities have also been colored to show the velocity changes for the different echoes.
Change Note Velocities for More Variety
But wait—there’s more! You can not only create polyrhythmic echoes, but also change velocities on the different notes. Although the later echoes can have different dynamics, there’s no law that says all the changes must be uniform. Nor do you have to follow the standard “rules” of echo—consider dragging very low-velocity notes ahead of the beat to give pre-echo.
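For the manual-minded, here is a Python sketch of that copy-and-delay recipe (illustrative only; the note format is hypothetical, with times in ticks at 480 per quarter note):

```python
# Sketch of the copy-and-delay echo recipe above (illustrative only).
# Each note is (start_tick, pitch, velocity); 480 ticks = one quarter
# note, so delay=240 gives eighth-note echoes.
def midi_echo(notes, delay=240, repeats=3, decay=0.6):
    """Append quieter, delayed copies of each note."""
    echoes = []
    for start, pitch, vel in notes:
        for r in range(1, repeats + 1):
            echoes.append((start + r * delay,
                           pitch,
                           max(1, round(vel * decay ** r))))
    return sorted(notes + echoes)

riff = [(0, 60, 100), (480, 64, 100)]
for event in midi_echo(riff):
    print(event)
```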
MIDI Plug-Ins for Echo
Some DAWs that support MIDI plug-ins offer MIDI echo, which sure is convenient. Even if yours doesn’t, though, you can always create echoes manually, as described above. The bottom line is that there are many, many possibilities with MIDI echo…check them out.
Just like MIDI itself, the Korg SQ-64 hardware sequencer focuses on connectivity and control
The SQ-64 is unique in its ability to drive MIDI and modular synths, letting you create music with all your synth gear without the need for a computer, tablet, or cellphone. It features four hardware-based sequencer tracks, each with up to 64-step sequences.
The first three tracks support up to 8-note polyphony, with Mod, Pitch, and Gate outputs for each track. The fourth track is designed to be a monophonic 16-part sequencer, driving eight separate Gate outputs along with eight different MIDI notes — perfect for driving a drum machine or drum synthesis modules. So in total, you can send three polyphonic sequences to three different devices via MIDI or CV/Gate/Mod, plus a monophonic sequence with up to eight different MIDI notes to a MIDI device, plus a monophonic sequence with up to eight different parts sent out via Gate outputs. That’s a lot of creative potential for a compact hardware sequencer!
by Sweetwater
Blending CV/Gate and MIDI control in one portable box
It’s the unique combination of CV control, MIDI and audio sync, and polyphonic multitrack sequencing that makes Korg’s SQ-64 special. Check out Korg’s James Sajeva as he demos the SQ-64 with a rack of modular synths.
More Unique Step Sequencing features
The SQ-64 step sequencer has some unique features that are really only available with a step sequencer. You can set the steps to play backwards (Reverse), play from beginning to end and then turn around (Bounce), advance stochastically (randomly pick between one step forward, skip one forward, one step backward, or repeat the step), or jump randomly (pick from all the available steps in the track). Combine that with polyrhythms (each track can have a different length) and independently changeable time divisions per track (1/32, 1/16, 1/8, 1/4, plus triplets), and there is an endless amount of creative fun available.
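To see how these playback modes differ, here is a Python sketch of the step-advance logic as described above (an interpretation for illustration, not Korg’s firmware):

```python
# Sketch of the step-advance modes described above, as I read them
# (an interpretation for illustration, not Korg's firmware).
import random

def next_step(step, length, mode):
    if mode == "forward":
        return (step + 1) % length
    if mode == "reverse":
        return (step - 1) % length
    if mode == "stochastic":
        # One forward, skip one forward, one backward, or repeat:
        return (step + random.choice([1, 2, -1, 0])) % length
    if mode == "random":
        return random.randrange(length)    # any step in the track
    raise ValueError(f"unknown mode: {mode}")

def bounce(step, length, direction):
    """Bounce mode needs direction state: turn around at either end."""
    nxt = step + direction
    if nxt < 0 or nxt >= length:
        direction = -direction
        nxt = step + direction
    return nxt, direction
```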
DAW software, like Ableton Live, Logic, Pro Tools, Studio One, etc. isn’t just about audio. Virtual instruments that are driven by MIDI data produce sounds in real time, in sync with the rest of your tracks. It’s as if you had a keyboard player in your studio who played along with your tracks, and could play the same part, over and over again, without ever making a mistake or getting tired.
MIDI-compatible controllers, like keyboards, drum pads, mixers, control surfaces, and the like, generate data that represents performance gestures (fig. 1). These include playing notes, moving controls, changing level, adding vibrato, and the like. The computer then uses this data to control virtual instruments and effects.
Figure 1: Native Instruments’ Komplete keyboards generate MIDI data, but can also edit the parameters of virtual instruments.
Virtual Instrument Basics
Virtual instrument “tracks” are not traditional digital audio tracks, but instrument plug-ins triggered by MIDI data. The instruments exist in software. You can play a virtual instrument in real time, record what you play as data, edit it if desired, and then convert the virtual instrument’s sound to a standard audio track—or let it continue to play back in real time.
Virtual instruments are based on computer algorithms that model or reproduce particular sounds, from ancient analog synthesizers, to sounds that never existed before. The instrument outputs appear in your DAW’s mixer, as if they were audio tracks.
Why MIDI Tracks Are More Editable than Audio Tracks
Because virtual instruments are driven by MIDI data, editing the data that drives an instrument changes the part. This editing can be as simple as transposing to a different key, or as complex as changing an arrangement by cutting, pasting, and processing MIDI data in various ways (fig. 2).
Figure 2: MIDI data in Ableton Live. The rectangles indicate notes, while the lines along the bottom show the dynamics for the various notes. All of this data is completely editable.
Because MIDI data can be modified so extensively after being recorded, tracks triggered by MIDI data are far more flexible than audio tracks. For example, if you record a standard electric bass part and decide you should have played the part with a synthesizer bass instead, or used the neck pickup instead of the bridge pickup, you can’t make those changes. But the same MIDI data that drives a virtual bass can just as easily drive a synthesizer, and the virtual bass instrument itself will likely offer the sounds of different pickups.
How DAWs Handle Virtual Instruments
Programs handle virtual instrument plug-ins in two main ways:
The instrument inserts in one track, and a separate MIDI track sends its data to the instrument track.
More commonly, a single track incorporates both the instrument and its MIDI data. The track itself consists of MIDI data. The track output sends audio from the virtual instrument into a mixer channel.
Compared to audio tracks, there are three major differences when mixing with virtual instruments:
The virtual instrument’s audio is typically not recorded as a track, at least initially. Instead, it’s generated by the computer, in real time.
The MIDI data in the track tells the instrument what notes to play, the dynamics, additional articulations, and any other aspects of a musical performance.
In a mixer, a virtual instrument track acts like a regular audio track, because it’s generating audio. You can insert effects in a virtual instrument’s channel, use sends, do panning, automate levels, and so on.
However, after doing all needed editing, it’s a good idea to render (transform) the MIDI part into a standard audio track. This lightens the load on your CPU (virtual instruments often consume a lot of CPU power), and “future-proofs” the part by preserving it as audio. Rendering is also helpful in case the instrument you used to create the part becomes incompatible with newer operating systems or program versions. (With most programs, you can retain the original, non-rendered version if you need to edit it later.)
The Most Important MIDI Data for Virtual Instruments
The two most important parts of the MIDI “language” for mixing with virtual instruments are note data and controller data.
Note data specifies a note’s pitch and dynamics.
Controller data creates modulation signals that vary parameter values. These variations can be periodic, like vibrato that modulates pitch, or arbitrary variations generated by moving a control, like a physical knob or footpedal.
Just as you can vary a channel’s fader to change the channel level, MIDI data can create changes—automated or human-controlled—in signal processors and virtual instruments. These changes add interest to a mix by introducing variations.
Instruments with Multiple Outputs
Many virtual instruments offer multiple outputs, especially if they’re multitimbral (i.e., they can play back different instruments, which receive their data over different MIDI channels). For example, if you’ve loaded bass, piano, and ukulele sounds, each one can have its own output, on its own mixer channel (which will likely be stereo).
However, multitimbral instruments generally have internal mixers as well, where you can set the various instruments’ levels and panning (fig. 3). The mix of the internal sounds appears as a stereo channel in your DAW’s mixer. The instrument will likely incorporate effects, too.
Figure 3: IK Multimedia’s SampleTank can host up to 16 instruments (8 are shown), mix them down to a stereo output, and add effects.
Using a stereo, mixed instrument output has pros and cons.
There’s less clutter in your software mixer, because each instrument sound doesn’t need its own mixer channel.
If you load the instrument preset into a different DAW, the mix settings travel with it.
To adjust levels, the instrument’s user interface has to be open. This takes up screen space.
If the instrument doesn’t include the effects plug-ins needed to create a particular sound, then use the instrument’s individual outputs, and insert effects in your DAW’s mixer channels. (For example, using separate outputs for drum instruments allows adding individual effects to each drum sound.)
Are Virtual Instruments as Good as Physical Instruments?
This is a question that keeps cropping up, and the answer is…it depends. A virtual piano won’t have the resonating wood of a physical piano, but paradoxically, it might sound better in a mix because it was recorded with tremendous care, using the best possible microphones. Also, some virtual instruments would be difficult, or even impossible, to create as physical instruments.
One possible complaint about virtual instruments is that their controls don’t work as smoothly as, for example, those on analog synthesizers. This is because the control has to be converted into digital data, which is divided into steps. However, the MIDI 2.0 specification increases control resolution dramatically; the steps become so minuscule that rotating a control feels just like rotating a control on an analog synthesizer.
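For a feel of the numbers, here is a Python sketch that widens a 7-bit MIDI 1.0 value to 32 bits by bit-shift and bit-repeat, a common upscaling technique. This shows the general idea only; the MIDI 2.0 specification defines its own exact translation rules, which, unlike this simple sketch, also preserve the center value exactly.

```python
# Sketch: widening a 7-bit MIDI 1.0 value (0-127) to 32 bits by shifting
# it to the top and repeating its bits downward, so 0 stays 0 and 127
# becomes 0xFFFFFFFF. General idea only; MIDI 2.0 defines its own exact
# scaling rules, which also preserve the center value.
def upscale_7_to_32(value):
    result = value << 25              # place the 7 bits at the top
    shift = 25 - 7
    while shift > -7:                 # fill the low bits by bit-repeat
        result |= (value << shift) if shift >= 0 else (value >> -shift)
        shift -= 7
    return result & 0xFFFFFFFF

print(hex(upscale_7_to_32(0)))        # 0x0
print(hex(upscale_7_to_32(64)))       # 0x81020408 (roughly half scale)
print(hex(upscale_7_to_32(127)))      # 0xffffffff (full scale)
```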
MIDI 2.0 also makes it easier to integrate physical instruments with DAWs, so they can be treated more like virtual instruments, and offer some of the same advantages. So the bottom line is that the line between physical and virtual instruments continues to blur—and both are essential elements in today’s recordings.
This workshop is part of a series of monthly free live events about MIDI organised by the Music Hackspace
Date & Time: Tuesday 27th April 6pm UK / 7pm Berlin / 1pm NYC / 10am LA
Level: Beginner
Ableton Live offers a vast playground of musical opportunities to create musical compositions and productions. Live’s native MIDI FX provides a range of tools to allow the composer and producer to create ideas in a myriad of ways. Max For Live complements these tools and expands musical possibilities. In this workshop you will creatively explore and deploy a range of MIDI FX in a musical setting. This workshop aims to provide you with suitable skills to utilise the creative possibilities of MIDI FX in the Ableton Live environment.
Session Learning Outcomes
By the end of this session a successful student will be able to:
Identify and deploy MIDI FX
Explore native and M4L MIDI FX in Live
Render the output of MIDI FX into MIDI clips for further manipulation
Apply MIDI FX to create novel musical and sonic elements
Session Study Topics
Using MIDI FX
Native and M4L MIDI FX
Rendering MIDI FX outputs
Creatively using MIDI FX
Requirements
A computer and internet connection
A web cam and mic
A Zoom account
Access to a copy of Live Suite with M4L (i.e. trial or full license)
About the workshop leader
Mel is a London based music producer, vocalist and educator.
She spends most of her time teaching people how to make music with Ableton Live and Push. When she’s not doing any of the above, she makes educational content and helps music teachers and schools integrate technology into their classrooms. She is particularly interested in training and supporting female and non-binary people to succeed in the music world.
MIDI Polyphonic Expression (MPE) offers a vast playground of musical opportunities to create musical compositions and productions. Live 11 supports a range of MPE tools to allow the composer and producer to create ideas in a myriad of ways. In this workshop you will creatively explore and deploy a range of MPE techniques in a musical setting. This workshop aims to provide you with suitable skills to utilise the creative possibilities of MPE in the Ableton Live environment.
Session Learning Outcomes
By the end of this session a successful student will be able to:
Identify the role and function of MPE
Explore MPE compatible devices in Live
Utilize MPE controllers within Live 11
Apply MPE to create novel musical and sonic elements
Session Study Topics
Using MPE
MPE devices in Live
MPE controllers
Creatively using MPE
Requirements
A computer and internet connection
A web cam and mic
A Zoom account
Access to a copy of Live 11 (i.e. trial or full license)
From the first introduction of MIDI at the 1983 NAMM Show to the adoption of MIDI 2.0 at NAMM 2020, NAMM (the National Association of Music Merchants) has always been a part of, and a partner in, our shared journey.
At Winter NAMM we always hold a joint meeting between The MIDI Association and AMEI (the Japanese Association of Musical Electronics Industry) which oversees the MIDI spec in Japan. We also hold our Annual General Meeting where the MIDI Association corporate members meet, adopt new specifications and discuss plans for the next year.
This year is different because NAMM is holding an all virtual event called Believe In Music. The event opens on Monday, January 11, 2021 but most of the events take place the week of January 18.
We decided to try to keep things as normal as possible, so here is the schedule of MIDI Association events for Believe in Music week.
MPE is a relatively new specification for MIDI, the universal protocol for electronic music. MPE allows digital instruments to behave more like acoustic instruments in terms of spontaneous, polyphonic sound control, so players can modulate parameters like timbre, pitch, and amplitude — all at the same time.
Join Audio Modeling, Keith McMillen Instruments, moForte, ROLI, and other MPE companies in an exploration of MIDI Polyphonic Expression.
Profile Configuration
MIDI gear can now have Profiles that can dynamically configure a device for a particular use case. The MIDI Association has adopted our first Profile, Default Controller Mapping, and is considering Profiles for Orchestral Articulations, Drawbar Organ, Guitar, Piano, DAW Control, Effects and more.
Property Exchange
While Profiles set up an entire device, Property Exchange messages provide specific, detailed information sharing. These messages can discover, retrieve, and set many properties like preset names, individual parameter settings, and unique functionalities. For example, your recording software could display everything you need to know about a synthesizer onscreen, effectively bringing hardware synths up to the same level of recallability as their software counterparts.
Property Exchange will bring the same level of recallability that soft synths have to hardware MIDI products
When MIDI first started there was only one transport: the 5-pin DIN cable. But soon there were many different ways to send MIDI messages, over USB, RTP, FireWire, and many more cables and transports. None has been more transformative than BLE-MIDI, because it allows you to send MIDI wirelessly over Bluetooth, freeing products and performers from the restriction of being tethered to a cable. Join Aodyo, CME, Novalia, Roland, Quicco, Yamaha, and other BLE companies in a discussion of the benefits of BLE-MIDI.
DJ Qbert’s BLE MIDI Interactive Album Cover by Novalia
MIDI 2.0 is bi-directional and changes MIDI from a monologue to a dialog. For example, with the new MIDI-CI (Capability Inquiry) messages, MIDI 2.0 devices can talk to each other, and auto-configure themselves to work together.
Higher Resolution, More Controllers and Better Timing
To deliver an unprecedented level of musical and artistic expressiveness, MIDI 2.0 re-imagines the role of performance controllers. Controllers are now easier to use, and there are more of them: over 32,000 controllers, including controls for individual notes. Enhanced, 32-bit resolution gives controls a smooth, continuous, “analog” feel. New Note-On options were added for articulation control and precise note pitch.
MIDI 2.0 has MIDI 1.0 inside it making translation back and forth easy
2020 was a pretty tough year and everyone was affected by the events that shaped the world.
But 2020 had its positive moments too. So we’d like to focus on the good things that happened during 2020 in the MIDI Association.
At the January 2020 NAMM show, the MIDI Association and AMEI officially adopted MIDI 2.0.
On February 20, 2020 (02-20-2020) we published the first five Core MIDI 2.0 specifications to the world.
In April, the MIDI.org website was selected by the United States Library of Congress for inclusion in the historic collection of Internet materials related to the Professional Organizations for Performing Arts Web Archive.
During May Is MIDI Month, we raised $18,000 and committed to spend that money on people affected by the global pandemic.
In June, at WWDC, Apple announced Big Sur (MacOS 11.0) which includes MIDI-CI support. The OS was released in November. Also in June, the USB-IF published the USB MIDI 2.0 specification.
In September, we did a webinar at the International Game Developers Association on MIDI 2.0 for our Interactive Audio Special Interest Group.
In October, we published a new Specifications area of our website and we have now published 15 MIDI 2.0 specifications.
In December, we announced our NAMM Believe In Music Week participation and the first annual MIDI Innovation Awards.
So in the midst of one of the most challenging years in history, we made huge progress in moving MIDI (and the MIDI Association) forward.
To help celebrate, we have arranged a discount on a great book on MIDI and free attendance at NAMM’s Believe In Music week for all our MIDI Association members.
Welcome to 2021; it is going to be a very significant year in the history of MIDI.
We make Live, Push and Link — unique software and hardware for music creation and performance. With these products, our community of users creates amazing things. Ableton was founded in 1999 and released the first version of Live in 2001. Our products are used by a community of dedicated musicians, sound designers, and artists from across the world.
Making music isn’t easy. It takes time, effort, and learning. But when you’re in the flow, it’s incredibly rewarding. We feel the same way about making Ableton products. The driving force behind Ableton is our passion for what we make, and the people we make it for.
Song Maker Kit
The ROLI Songmaker Kit comprises some of the most innovative and portable music-making devices available. It’s centered around the Seaboard Block, a 24-note controller featuring ROLI’s acclaimed keywave playing surface. It’s joined by the Lightpad Block M touch controller and the Loop Block control module, for comprehensive control over the included Equator and NOISE software. Complete with a protective case, the ROLI Songmaker Kit is a powerful portable music creation system.
The Songmaker Kit also includes Ableton Live Lite, and Ableton is a May MIDI Month platinum sponsor.
Brothers Marco and Jack Parisi recreate a Michael Jackson classic hit
Electronic duo PARISI are true virtuosic players of ROLI instruments, whose performances have amazed and astounded audiences all over the world — and their latest rendition of Michael Jackson’s iconic pop hit “Billie Jean” is no exception.
Sometimes you just need to relax and do something cool.
So on Labor Day weekend 2020, we shared this video of MEZERG enjoying some cool watermelon, some bright sun, and a dip in the pool.
Oh yeah, and MIDI of course!
Want to try it yourself? Playtronica makes it possible
Playtron is a new type of music device.
Connect Playtron to fruit and play electronic music using online synthesizers, or use it as a MIDI controller with any music software and conductive objects.
Buy Playtron or TouchMe, two gadgets that let you play music on any object. We are an international studio dedicated to creating meaningful interactive audio experiences, in collaboration with brands, marketers, museums, galleries, and artists.
The best new way to learn piano. Learning with flowkey is easy and fun. Practice notes and chords interactively and receive instant feedback.
The idea behind flowkey is simple: “learn piano with songs you love.” And the flowkey app makes it easy to learn your favorite songs, whether your level is that of a Beginner, Intermediate, Advanced or Pro piano player!
Discover fascinating piano arrangements tailored to your level. Get started today and play your first song within minutes.
Click on the links below to see the Yamaha keyboards that qualify in your area.
New presets from Jordan Rudess and more for Mac/PC and iOS.
Jordan Rudess recently took the stage with Deep Purple for a festival performance in Mexico City, using the Hammond B-3X as the band’s sole organ instrument. The Hammond B-3X fit seamlessly into the performance, nailing every organ sound the band has built its sound upon. Jordan and IK product manager Erik Norlander created 24 custom presets for the show, with the idea of also releasing them to all Hammond B-3X users. The presets are automatically installed with the 1.3 update.
Mac/PC version:
– 24 new Jordan Rudess Deep Purple presets
– Compatibility with iPad preset sharing
– Controllers are now received only on the assigned channels
– Pitch bend range is now stored globally
iPad version:
– 24 new Jordan Rudess Deep Purple presets
– New share function for importing and exporting presets with the desktop version and other iPads
– New restore factory presets function
– Controllers are now received only on the assigned channels
– Pitch bend range is now stored globally
Update your software now to gain all of these added features!
Audio Modeling has been coming out with more and more physically modeled instruments that add incredible realism and expressiveness. Recently they released the Solo Brass Bundle.
You can buy either individual instruments or save money by buying the entire bundle.
Want to connect modular hardware to Ableton Live? There are a number of ways to go about this depending on what software and hardware you have. In this article, we break down the different methods and explain the gear you might need.
Live is fast, fluid and flexible software for music creation and performance. It comes with effects, instruments, sounds and all kinds of creative features—everything you need to make any kind of music.
Create in a traditional linear arrangement, or improvise without the constraints of a timeline in Live’s Session View. Move freely between musical elements and play with ideas, without stopping the music and without breaking your flow.
Ableton and Max for Live
Max For Live puts the vast creative potential of the Max development environment directly inside of Live. It powers a range of instruments and devices in Live Suite. And for those who want to go further, it lets you customize devices, create your own from scratch, and explore another world of devices produced by the Max For Live community.
Ableton makes Push and Live, hardware and software for music production, creation and performance. Ableton’s products are made to inspire creative music-making.
We have actively participated in creating the MIDI 2.0 specifications in the MIDI Manufacturers Association for many years. This year, some specifications will be finalized, and the Bome products will learn new MIDI 2.0 features along that path. The main focus will be on bridging MIDI 1.0 gear with the MIDI 2.0 world: proxying and translation. Existing BomeBox owners will also benefit from these new features by way of free firmware upgrades.
by Florian Bome
The BomeBox is a versatile hardware MIDI router, processor, and translator in a small, robust case. Connect your MIDI gear via MIDI-DIN, USB, Ethernet, and WiFi to the BomeBox and benefit instantly from all its functions. It’s a solution for your MIDI connection needs on stage or in the studio.
In conjunction with the desktop editor software Bome MIDI Translator Pro (sold separately), you can create powerful MIDI mappings, including layerings, MIDI memory, and MIDI logic. A computer is only needed for creating the mapping. Once it is loaded into the BomeBox, a computer is not necessary for operation.
BomeBox Overview
BomeBox Features
Configuration
The BomeBox is configured via a web browser. Just enable the integrated WiFi Hot Spot, connect your cell phone, tablet, or computer to it, and open a web browser to access the easy-to-use web configuration.
MIDI DIN
Connect your MIDI gear to the two standard MIDI DIN input and output ports. If you need more MIDI-DIN ports, use the MIDI Host port!
USB Host
The USB Host port allows you to connect any (class compliant) USB-MIDI device to the BomeBox, and use the advanced MIDI router and processing.
USB Hubs
Using a USB hub, you can connect even more USB-MIDI devices to a BomeBox. The MIDI Router allows fine grained routing control for every connected MIDI device individually.
MIDI Router
The integrated MIDI Router gives you full control over which MIDI device talks to which other MIDI device connected to the BomeBox. And if you need more fine grained filtering, or routing by MIDI channel, note number, etc., see Processing below.
Network MIDI Support
The BomeBox has two Ethernet ports. You can use Ethernet to directly connect BomeBox to BomeBox or to a computer. Using the Bome Network tool (see below), all BomeBoxes are auto-discovered. Once set up (“paired”), Network MIDI connections are persistent across reboots and BomeBox power cycles.
Wireless MIDI
The BomeBox’s integrated WiFi HotSpot can also be used for wireless MIDI connections to computers and/or to other BomeBoxes. You can also configure the BomeBox to be a WiFi client for integration into existing WiFi networks.
Processing
The powerful MIDI processing of Bome MIDI Translator Pro is available in the BomeBox. Hundreds of thousands of processing entries can be stored on the BomeBox.
Incoming Actions:
MIDI messages
Keystrokes (on QWERTY keyboard or number pad)
Data on Serial Port
Timed events
Enable/disable translation preset
Scripting (“Rules”):
A sequence of rules can be defined to be processed if the incoming action matches:
assignments of variables, e.g. pp = 20
simple expressions, e.g. pp = og + 128
labels and goto, e.g. goto “2nd Options”
conditional execution, e.g. IF pp < 20 THEN do not execute Outgoing Action
Outgoing Actions:
Send MIDI messages
Send bytes or text to Serial Ports
Create/start/stop timer
Enable/disable translation preset
Open another translation project
Keystroke (QWERTY) Input Support
Connect a (wireless) computer keyboard or a number pad to the BomeBox, then use the processing capabilities to convert keystrokes to MIDI or trigger other actions! Really? Yes! And it’s useful… sometimes!
RS-232 Serial Port Support
The BomeBox also supports RS-232 adapters to be plugged into the USB host port. Now all processing actions are available in conjunction with serial ports, too: convert serial data to MIDI and vice versa. Route Serial port data via Ethernet. Or integrate older mixing consoles which only talk RS-232.
Allen & Heath Digital Mixer Support
Last, but not least, the BomeBox has built-in support for Allen & Heath mixers connected via Ethernet. They’re auto-discovered, and once you’ve paired them, all the MIDI routing and processing is available to the connected A&H mixer, too!
Bome Network
The standard edition of the Bome Network tool allows connecting your computer to one or more BomeBoxes via Ethernet and WiFi. Any MIDI application can send MIDI to the BomeBox and receive from it. On the BomeBox, you can configure which MIDI stream is sent to a particular connected computer.
BomeBoxes are auto-discovered, and once you’ve established a connection (“paired”), it is persistent across reboots and BomeBox power cycles.
If you like to set up network MIDI connections from computer to computer, use the Add-On Bome Network Pro.
Bome Network is available for Windows and for macOS.
Take your MIDI gear to the next level! Bome Software creates software and hardware for custom interaction with your MIDI devices and the computer. Used by live sound engineers, controllerists, DJ’s, theaters and opera houses, lighting engineers, beat boxers, performance artists, music and broadcasting studios, and many others.
We have actively participated in creating the MIDI 2.0 specifications in the MIDI Manufacturers Association for many years. This year, some specifications will be finalized, and the Bome products will learn new MIDI 2.0 features along that path. The main focus will be on bridging MIDI 1.0 gear with the MIDI 2.0 world: proxying and translation. Existing BomeBox owners will also benefit from these new features by way of free firmware upgrades.
by Florian Bome
The BomeBox is a versatile hardware MIDI router, processor, and translator in a small, robust case. Connect your MIDI gear via MIDI-DIN, USB, Ethernet, and WiFi to the BomeBox and benefit instantly from all its functions. It’s a solution for your MIDI connection needs on stage or in the studio.
In conjunction with the desktop editor software Bome MIDI Translator Pro (sold separately), you can create powerful MIDI mappings, including layerings, MIDI memory, and MIDI logic. A computer is only needed for creating the mapping. Once it is loaded into the BomeBox, a computer is not necessary for operation.
BomeBox Overview
BomeBox Features
Configuration
The BomeBox is configured via a web browser. Just enable the integrated WiFi Hot Spot, connect your cell phone, tablet, or computer to it, and open a web browser to access the easy-to-use web configuration.
MIDI DIN
Connect your MIDI gear to the two standard MIDI DIN input and output ports. If you need more MIDI-DIN ports, use the MIDI Host port!
USB Host
The USB Host port allows you to connect any (class compliant) USB-MIDI device to the BomeBox, and use the advanced MIDI router and processing.
USB Hubs
Using a USB hub, you can connect even more USB-MIDI devices to a BomeBox. The MIDI Router allows fine grained routing control for every connected MIDI device individually.
MIDI Router
The integrated MIDI Router gives you full control over which MIDI device talks to which other MIDI device connected to the BomeBox. And if you need more fine grained filtering, or routing by MIDI channel, note number, etc., see Processing below.
Network MIDI Support
The BomeBox has two Ethernet ports. You can use Ethernet to directly connect BomeBox to BomeBox or to a computer. Using the Bome Network tool (see below), all BomeBoxes are auto-discovered. Once set up (“paired”), Network MIDI connections are persistent across reboots and BomeBox power cycles.
Wireless MIDI
The BomeBox's integrated WiFi hotspot can also be used for wireless MIDI connections to computers and/or to other BomeBoxes. You can also configure the BomeBox as a WiFi client for integration into existing WiFi networks.
Processing
The powerful MIDI processing of Bome MIDI Translator Pro is available in the BomeBox. Hundreds of thousands of processing entries can be stored on the BomeBox.
Incoming Actions:
MIDI messages
Keystrokes (on QWERTY keyboard or number pad)
Data on Serial Port
Timed events
Enable/disable translation preset
Scripting (“Rules”):
A sequence of rules can be defined to be processed if the incoming action matches (see the sketch after the Outgoing Actions list):
assignments of variables, e.g. pp = 20
simple expressions, e.g. pp = og + 128
labels and goto, e.g. goto “2nd Options”
conditional execution, e.g. IF pp < 20 THEN do not execute Outgoing Action
Outgoing Actions:
Send MIDI messages
Send bytes or text to Serial Ports
Create/start/stop timer
Enable/disable translation preset
Open another translation project
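To make that flow concrete, here is a minimal Python sketch (an illustration only, not Bome's actual rule engine or syntax) of a single translator entry: match an incoming CC, run a couple of rules on a local variable, and fire the outgoing action only if the condition passes.

# Illustration of a translator entry's shape: incoming action -> rules -> outgoing action.
def translate(incoming):
    status, controller, value = incoming
    # Incoming Action: match CC#7 on MIDI channel 1 (status byte 0xB0)
    if status != 0xB0 or controller != 7:
        return None                 # no match, nothing happens
    # Rules: an assignment and a simple expression on a local variable
    pp = value
    pp = pp + 10
    # Conditional execution: IF pp < 20 THEN do not execute Outgoing Action
    if pp < 20:
        return None
    # Outgoing Action: send CC#10 on channel 1 with the computed value
    return (0xB0, 10, min(pp, 127))

print(translate((0xB0, 7, 64)))     # (176, 10, 74): outgoing action fired
print(translate((0xB0, 7, 5)))      # None: the rule suppressed the outgoing action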
Keystroke (QWERTY) Input Support
Connect a (wireless) computer keyboard or a number pad to the BomeBox, then use the processing capabilities to convert keystrokes to MIDI or trigger other actions! Really? Yes! And it's useful… sometimes!
RS-232 Serial Port Support
The BomeBox also supports RS-232 adapters to be plugged into the USB host port. Now all processing actions are available in conjunction with serial ports, too: convert serial data to MIDI and vice versa. Route Serial port data via Ethernet. Or integrate older mixing consoles which only talk RS-232.
Allen & Heath Digital Mixer Support
Last, but not least, the BomeBox has built-in support for Allen & Heath mixers connected via Ethernet. They’re auto-discovered, and once you’ve paired them, all the MIDI routing and processing is available to the connected A&H mixer, too!
Bome Network
The standard edition of the Bome Network tool allows connecting your computer to one or more BomeBoxes via Ethernet and WiFi. Any MIDI application can send MIDI to the BomeBox and receive from it. On the BomeBox, you can configure which MIDI stream is sent to a particular connected computer.
BomeBoxes are auto-discovered, and once you’ve established a connection (“paired”), it is persistent across reboots and BomeBox power cycles.
If you like to set up network MIDI connections from computer to computer, use the Add-On Bome Network Pro.
Bome Network is available for Windows and for macOS.
Take your MIDI gear to the next level! Bome Software creates software and hardware for custom interaction with your MIDI devices and the computer. Used by live sound engineers, controllerists, DJs, theaters and opera houses, lighting engineers, beat boxers, performance artists, music and broadcasting studios, and many others.
Wondering how to connect and control your hardware and software instruments in one place? Want to remotely control your Yamaha synthesizers and quickly recall presets on stage? How about attaching a lead sheet or music score with your own notes to a set of sounds?
Camelot Pro and Yamaha have teamed up with special features for Yamaha Synth owners.
REGISTER AND GET CAMELOT PRO FOR MAC OS OR WINDOWS
Download your Camelot Pro copy now with a special offer for Yamaha Synth owners: try the full version FREE for three months with an option to purchase for 40% off.
The promo is valid from October 1, 2019 to September 30, 2020.
Upgrade your live performance experience to the next level:
Build your live set list with ease
Manage your Yamaha instruments using smart maps (no programming skills required!)
Combine, layer and split software instruments with your Yamaha synths
Get rid of standard connection limits with Camelot Advanced MIDI routing
Attach music scores or chords to any scene
The really slick thing about combining Yamaha synths with Camelot Pro is that it lets you very easily integrate your hardware synths and VST/AU plugins for live performance. The Yamaha synths connect to your computer via USB and carry both digital audio and MIDI. So just connect your computer to your Yamaha synth, and then your Yamaha synth to your sound system. Camelot allows you to integrate your hardware and software in complex splits and layers, and everything comes out the analog outputs of your Yamaha synth.
If you have Cubase/Nuendo, take advantage of the special 50% off promotion that Steinberg is running until June 30 on VST Connect Pro.
If you are a musician who works with producers who use Cubase/Nuendo, you can download VST Connect Performer for free and do studio sessions from the comfort of your home.
Music with no boundaries
VST Connect Pro lets you expand your studio from its physical location to cover the whole world. It allows any musician with a computer, an internet link and the free VST Connect Performer app to be recorded direct on your studio DAW, even if they are on a different continent, because VST Connect Pro makes distance irrelevant. Not only that, but you can see and talk to each other, while the producer has full control over the recording session at both ends of the connection, including cue mix and talkback level.
Multi-track remote recording
Is a musician you want to work with thousands of miles away? No problem. Remote record in real time and the uncompressed audio files are loaded automatically in the background. And you never need to worry about the Internet connection – all VST Connect Performer HD recordings are saved on the musician’s local hard drive and can be reloaded into VST Connect Pro at any time. Worried about security? Don’t be – the unique data encryption system means that your work will always stay yours.
MIDI around the world
VST Connect Pro allows you to record MIDI and audio data live from a VST instrument loaded into VST Connect Performer, anywhere in the world. The artist can even connect a MIDI controller, leaving the session admin to record the incoming MIDI data directly in Cubase, together with the audio stream from the VST instrument.
It also works both ways – send MIDI data from your Cubase project, via VST Connect, to any MIDI compatible instrument or VST instrument connected to a remote instance of VST Connect Performer and record the incoming audio signal.
VST Connect Performer
VST Connect Performer is a license-free, DAW-independent application for the musician being recorded to connect directly into your VST Connect Pro recording session. Available for PC, Mac or iPad, VST Connect Performer is remotely controlled from VST Connect Pro, freeing the musician to concentrate on their performance, be it vocals or an instrument sent as an audio signal. MIDI data or VST instruments can also be played in real time from VST Connect Performer into the VST Connect Pro session. Meanwhile, VST Connect Manager helps you to maintain an overview of your recordings.
VST Connect offers you a fundamental kind of improvement that goes beyond the studio realm. Simply put, I have much more time for my kids now. For something as abstract as a feature in a DAW to have that kind of effect on one’s private life is quite an astonishing achievement. I can’t think of anything comparable.
Safe Spacer™ is a new, lightweight wearable device that helps workers and visitors maintain safe social distancing, enabling MI and other industries to safely re-open and operate with peace of mind.
Using Ultra-wideband technology, Safe Spacer runs wirelessly on a rechargeable battery and precisely senses when other devices come within 2m/6ft, alerting wearers with a choice of visual, vibrating or audio alarm.
Simple to use, Safe Spacer features a patent-pending algorithm that works immediately out of the box, with no set-up or special infrastructure needed and can be comfortably worn on a wristband, with a lanyard, or carried in a pocket. It offers ultra-precise measurement down to 10cm/4” – ten times more accurate than Bluetooth applications.
Ideal for factories, warehouses and offices, Safe Spacer can also be used by visitors of public spaces such as music schools, large retailers, auditoriums, workshops spaces and more. Engineered for fast, easy disinfection, it’s also waterproof. For minimal handling, Safe Spacer works wirelessly via NFC contactless technology or Bluetooth.
Each Safe Spacer also features a unique ID tag and built-in memory that can be optionally associated to workers’ names for tracing any unintentional contact, to keep organizations and their employees secure. To maintain the highest standard of privacy, no data other than the Safe Spacer ID and proximity is stored.
For advanced use, set-up and monitoring in workspaces, an iOS/Android app is also available to allow human resources or safety departments to associate IDs to specific workers, log daily tracing without collecting sensitive data, configure the alarms, set custom distance and alert thresholds, export log data and more.
“We created Safe Spacer to help our Italian factory workers maintain safe distance during re-opening. It’s easy to use, fast to deploy, private and secure, so it can be used comfortably in any situation. We hope this solution helps other companies feel secure as they re-open, too.”
Way back in 1996 — around the time electricity was discovered and cell phones were the size of your average 4-slot toaster — two Italian engineers got together to solve a problem in a recording studio. Could you get the sound of classic analog gear from a computer? One of them said (in Italian, of course) “Could we emulate electronic circuits using DSP algorithms and feed an audio signal through the computer and get the same sound?” The answer was yes, the piece of gear they emulated was a vintage Abbey Road console, and a company was born.
Although that’s a pretty simplified version of how IK came to be, it reflects the driving philosophy behind all of our products: give musicians the tools they want/need to be creative and productive.
Recreate classic legendary products in the digital world and make them available to all musicians. But make them simple. Make them both aspirational and affordable. And make them for Musicians First.
iRig Keys I/O
The iRig® Keys I/O series evolves the concept of traditional controllers as the only one available on the market that integrates 25 or 49 full-sized keys together with a fully-fledged professional audio interface featuring 24-bit audio up to 96kHz sampling rate, balanced stereo and headphone outputs, plus a combo input jack for line, instrument or mic input (with phantom power).
The first Lightning/USB compatible mobile MIDI interface that works with all generations of iOS devices, Android (via optional OTG to Mini-DIN cable) as well as Mac and PC. It features everything you loved about iRig MIDI but with even greater pocketability, connectivity and control.
Simply put, it’s the perfect MIDI solution for the musician on the move.
Syntronik is a cutting-edge virtual synthesizer that raises the bar in sound quality and flexibility thanks to the most advanced sampling techniques combined with a new hybrid sample and modeling synthesis engine. Watch as legendary keyboardist Jordan Rudess demonstrates his own Syntronik presets using the synth powerhouse together with SampleTank 3. See how a master keyboard player uses IK's synth and workstation products to make great music.
The Bob Moog Foundation and the MIDI Association have had a close working relationship for many years. When we talked to Michelle Moog-Koussa, she graciously agreed to provide some materials on synthesizers for the May Is MIDI Month 2020 promotion.
The series of posters in this article are available for purchase here, with the proceeds going to the Moog Foundation.
We have combined it with Ableton’s excellent interactive website for Learning Synths, Google’s Chrome Music Lab, and text from synth master Jerry Kovarsky, monthly columnist for Electronic Musician Magazine and author of Keyboard For Dummies.
Together these elements come together to make a great introduction to synthesis appropriate for students and musicians of all ages and levels. There are links to more information in each section.
MIMM 2020 Webinar: The Minimoog, The Synth That Changed the World. Saturday, May 9, 10 am Pacific
Join us this Saturday at 10 am Pacific, 1 pm Eastern, and 6 pm Greenwich on MIDI Live to hear a panel discussion about the Minimoog, one of the most influential synths of all time.
Panelists include Michelle Moog-Koussa and David Mash from the Bob Moog Foundation Board of Directors, Amos Gaynes and Steve Dunnington from Moog Music, and synth artists and sound designers Jack Hotop, senior sound designer for Korg USA, Jordan Rudess, keyboardist for Dream Theater and president of Wizdom Music (makers of MorphWiz, SampleWiz, HarmonyWiz, Jordantron), and Huston Singletary, US lead clinician and training specialist for Ableton Inc.
Jerry Kovarsky - Author of Keyboard For Dummies
David Mash - President of the Bob Moog Foundation
Michelle Moog-Koussa - Executive Director of the Bob Moog Foundation
Jordan Rudess - Keyboardist for Dream Theater
Jack Hotop - Senior Sound Designer for Korg USA
Huston Singletary - Lead Sound Designer for Ableton
Composer Alex Wurman Provides Sonic Meditation For All Mothers as Part of Moogmentum in Place
The Bob Moog Foundation is proud to announce that EMMY® Award-winning composer Alex Wurman will perform a Facebook live stream concert to benefit the Foundation on Saturday, May 9th at 8pm (ET) / 5pm (PT), the eve before Mother's Day. Wurman will inspire a worldwide audience with A Sonic Meditation for All Mothers on a Yamaha Disklavier and a Moog Voyager synthesizer. The performance and accompanying question-and-answer session, which will last approximately an hour, is meant to offer musical solace during these times of difficulty.
Listen to the Synth sound in the video and then check it out for yourself via the link below.
Learn about synthesizers via Ableton’s interactive website. Play with a synth in your browser and learn to use the various parts of a synth to make your own sounds.
A waveform is a visual representation of a continuous tone that you can hear. In analog synthesis the waveforms are somewhat simple and repetitious (with the exception of noise), because that was easier to generate electronically. But any sustaining, or ongoing sound can be analyzed and represented as a waveform. So any type of synthesizer has what are referred to as waveforms, even though they may be generated by sampling (audio recordings of sound), analog circuitry, DSP-generated signals, and various forms of digital sound manipulation (FM, Phase Modulation, Phase Distortion, Wavetables, Additive Synthesis, Spectral Resynthesis and much more). However they are created, we generally refer to the sonic building block of sound as a waveform.
Simply stated, an oscillator is the electronic device, or part of a software synthesizer design that generates a waveform. In an analog synthesizer it is a physical circuit made up of electronic components. In digital/DSP-driven synthesizers (including soft synths) it is a part of the software code that is instructed/coded to produce a waveform, or tone.
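To make this concrete, here is a minimal Python/NumPy sketch (illustrative only, not any synth's actual code) of a software oscillator producing one second of a sawtooth waveform:

import numpy as np

SAMPLE_RATE = 44100                      # samples per second

def saw_oscillator(freq_hz, seconds):
    # Naive sawtooth: a ramp from -1 to 1 that repeats freq_hz times per second
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    phase = (t * freq_hz) % 1.0
    return 2.0 * phase - 1.0

wave = saw_oscillator(440.0, 1.0)        # one second of A440
print(wave[:5])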
Learn about synthesizers via Ableton’s interactive website. Play with a synth in your browser and learn to use the various parts of a synth to make your own sounds.
Harmonics are the building blocks of sound that make one instrument, or waveform, sound different from another. The levels of the harmonics as they exist in nature (the harmonic series) together determine the timbral "fingerprint" of a sound, so we can recognize the difference between a clarinet and a piano. Often these harmonics change in volume level and tuning as a sound develops, and might decay away: the more this happens, the more complex and "alive" a sound will seem to our ears. You can now go back to the original Waveform poster and understand that it is the harmonic "signature" of each waveform that gives it the sonic characteristics we used to describe each one.
The general dictionary definition of a filter is a device that holds back, lessens, or removes some of what passes through it. In synthesis, a filter is used to reshape the harmonic content of the oscillator-generated waveform. The above poster describes three of the most common types of filters from analog synthesis, but many more have been developed which have different characteristics. Different brands of synthesizers have their own filter designs with a special sound, and many of those classic designs are much sought-after and emulated in modern digital and software synthesizers.
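As a sketch of the idea (a deliberately simple one-pole low-pass in Python, not any manufacturer's filter design), here is how a filter can hold back a signal's high-frequency content:

import numpy as np

def one_pole_lowpass(signal, cutoff_hz, sample_rate=44100):
    # Each output sample blends the input with the previous output;
    # the lower the cutoff, the heavier the smoothing (less high end).
    a = np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    out = np.zeros_like(signal, dtype=float)
    prev = 0.0
    for i, s in enumerate(signal):
        prev = (1.0 - a) * s + a * prev
        out[i] = prev
    return out

bright = np.random.randn(44100)              # noisy, harmonically rich input
mellow = one_pole_lowpass(bright, 800.0)     # same signal with highs rolled off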
Learn about synthesizers via Ableton’s interactive website. Play with a synth in your browser and learn to use the various parts of a synth to make your own sounds.
The poster says it straight up: an amp increases and decreases the volume of the sound that is output by the oscillator. If the sound only stayed at the single level determined by the amp, it would be pretty boring. Thankfully we have many ways to vary that output, via envelopes, LFOs, step-sequencers and more. Read on…
Learn about synthesizers via Ableton’s interactive website. Play with a synth in your browser and learn to use the various parts of a synth to make your own sounds.
An envelope (originally called a contour generator by Bob Moog!) is a building block of a synthesizer that changes the level of something over time. This is needed to recreate the complex characteristics of different sounds. The three main aspects of a sound that are usually shaped in this way are pitch (oscillator frequency), timbre (filter cutoff), and volume (amp level). Just considering the volume characteristics of a sound: some instruments keep sustaining (like a pipe organ), while others decay in volume over time (a plucked guitar string, or a struck piano note). In modern synthesizers, and in modular synths, an envelope can usually be routed to almost any parameter to change its value over time. The poster describes what is called an ADSR envelope, but there are many types; some allow many more stages to be defined, and on the flip side some are simpler, with only Attack and Release stages.
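Here is a minimal Python/NumPy sketch of a classic ADSR envelope (illustrative only); multiplying an oscillator's output by this array shapes its volume over time:

import numpy as np

def adsr(attack, decay, sustain_level, release, hold, sr=44100):
    # Times are in seconds; sustain_level is a 0..1 volume level.
    a = np.linspace(0.0, 1.0, int(sr * attack), endpoint=False)   # rise to peak
    d = np.linspace(1.0, sustain_level, int(sr * decay), endpoint=False)
    s = np.full(int(sr * hold), sustain_level)                    # held note
    r = np.linspace(sustain_level, 0.0, int(sr * release))        # fade out
    return np.concatenate([a, d, s, r])

env = adsr(attack=0.01, decay=0.1, sustain_level=0.7, release=0.3, hold=0.5)
# Multiply an oscillator's output by 'env' to shape its volume over time.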
Learn about synthesizers via Ableton’s interactive website. Play with a synth in your browser and learn to use the various parts of a synth to make your own sounds.
An LFO is another type of oscillator, dedicated to modulating, or affecting, another parameter of the sound in a cyclic fashion (meaning it keeps repeating). So it seems related to the function of envelopes, but it behaves differently in the sense that you can't shape it as finely. Yet it is easier to use for simple repeatable things like vibrato (pitch modulation), tremolo (amp level modulation), and panning (changing the amp output from left to right in a stereo field).
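And a matching sketch of an LFO (again, just an illustration in Python/NumPy): a slow sine wave cyclically nudging an oscillator's pitch up and down produces vibrato:

import numpy as np

SR = 44100

def vibrato_frequencies(base_hz, lfo_rate_hz, depth_hz, seconds):
    # The LFO is itself an oscillator; here it repeats lfo_rate_hz times
    # per second and swings the pitch +/- depth_hz around the base.
    t = np.arange(int(SR * seconds)) / SR
    lfo = np.sin(2.0 * np.pi * lfo_rate_hz * t)
    return base_hz + depth_hz * lfo

freqs = vibrato_frequencies(440.0, lfo_rate_hz=5.0, depth_hz=4.0, seconds=1.0)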
How can we use MIDI to interact with these parameters?
The most common use of MIDI to affect these parameters is to map, or assign, a physical controller on your keyboard or control surface to directly control a given parameter. We do this when we don't have the instrument right in front of us (it may be a rack-mount device, or a soft synth), or when it doesn't have many knobs/sliders/controls on the front panel. You would use CC numbers (Control Change) and match up the controller object (slider, encoder, whatever) to the destination parameter you wish to control.
Then when you move the controller it sends a steady stream of values (0 to 127, so 128 steps in all) to move/change the destination. A device may have those CC numbers hard set, or they may be freely assignable. Most soft synths have a "learn" function, where the synth "listens", or waits to receive an incoming MIDI message, and then sets it automatically, so you don't even need to know which CC number is being used.
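For example, with the Python MIDI library mido (one of many ways to do this; the port name below is hypothetical, so list yours with mido.get_output_names()), sweeping a mapped parameter is just a stream of Control Change messages:

import mido

out = mido.open_output('My Synth Port')   # hypothetical port name

# Sweep CC#74 (commonly mapped to filter cutoff) through its 0-127 range.
for value in range(128):
    out.send(mido.Message('control_change', channel=0, control=74, value=value))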
Some synths use what are called RPNs (Registered Parameter Numbers) and NRPNs (Non-Registered Parameter Numbers) to control parameters. While more complicated to set up, these types of messages offer finer resolution than CCs (16,384 steps), but do the same thing. Soon there will be MIDI 2.0, which brings 32-bit resolution, or 4,294,967,296 steps. Yes, that number is correct!
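Under the hood, an NRPN is just a short sequence of CCs: two to select the parameter number and two to carry the 14-bit value. A hedged mido sketch (the parameter number and port name are made up for illustration):

import mido

def send_nrpn(out, channel, param, value14):
    # CC#99/98 select the NRPN number (MSB/LSB); CC#6/38 carry the 14-bit value.
    out.send(mido.Message('control_change', channel=channel, control=99, value=param >> 7))
    out.send(mido.Message('control_change', channel=channel, control=98, value=param & 0x7F))
    out.send(mido.Message('control_change', channel=channel, control=6, value=value14 >> 7))
    out.send(mido.Message('control_change', channel=channel, control=38, value=value14 & 0x7F))

out = mido.open_output('My Synth Port')                 # hypothetical port name
send_nrpn(out, channel=0, param=0x0105, value14=8192)   # mid-scale of 16,384 steps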
From a performance standpoint, a cool benefit of using MIDI to control a parameter is you can choose to have a different type of controller interact with the given parameter than your hardware device offers. Some people like to use a ribbon to do pitch bends rather than a wheel. Or to sweep the cutoff of a filter using an X/Y pad rather than a knob. Or route keyboard after-touch to bring in vibrato or tremolo rather than a Mod Wheel (OK, this one went beyond using CCs but you get the picture).
Another nice way to use MIDI is to assign sliders or knobs to an ADSR envelope in a product that doesn’t already have dedicated knobs to control the stages. So now you can easily soften, or slow up the attack on a sound (or speed it up), lengthen or tighten up the release (what happens when you take your finger off the key).
Using MIDI really becomes an aid when I am recording. If I were to record only audio, as I play a synth I would need to get all of my interactions with the sound perfect during the performance: my pitch bends, my choices of when to add vibrato and how much to add, and any other interactions I want to make with the sound. I can't fix them later, as they are forever frozen in the audio I recorded. If I capture my performance using MIDI, each of those aspects is recorded as a different type of MIDI message/data, and I can then go back in and adjust them later. Too much vibrato on that one note? Go into event edit, find the stream of MIDI CC#1 messages, and adjust it to taste. Even better, I can record my performance without worrying about other gestures/manipulations I might want to make, and then go back and overdub, or add them in, later. So I can manipulate the sound and performance in ways that would be impossible to do in real time. When I get the performance shaped exactly as I want it, I can bounce the MIDI track to audio and I'm done. Thank you MIDI!
by Jerry Kovarsky, Musician and Author
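To make the "fix it later" idea concrete, here is a hedged mido sketch (the filename is hypothetical) that tames an overdone vibrato by halving every recorded mod wheel (CC#1) value in a MIDI file:

import mido

mid = mido.MidiFile('performance.mid')    # hypothetical recorded take

for track in mid.tracks:
    for msg in track:
        # CC#1 is the mod wheel, conventionally mapped to vibrato depth.
        if msg.type == 'control_change' and msg.control == 1:
            msg.value //= 2               # halve the vibrato amount

mid.save('performance_edited.mid')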
A Brief History of the Minimoog Part I
Follow the life of the Minimoog synthesizer from its inception through its prolific contributions to popular music over the last four decades. In this first installment, documenting the journey of the Minimoog through the 1970s, we explore the musicians and the people who were instrumental in bringing the instrument to prominence. We also sit down with one of Moog Music's earliest engineers, Bill Hemsath, who recalls the process of the Minimoog's birth and sheds some light on what sets the Moog synthesizer apart from other analog synths.
by Moog Music
A Brief History of the Minimoog Part II
Chronicling the influential artists who used the Minimoog Model D to explore new genres and discover the sounds of tomorrow.
Since 1979, we’ve helped music makers all across the world build their dreams. We are a team of gear heads who are committed to doing the right thing for our customers.
We are musicians, engineers, producers, Juilliard grads, Grammy winners, mothers, fathers, sons, and daughters. We are diverse in our backgrounds and beliefs, but we're all bound by the same goal: do the right thing for the customer.
Sweetwater offers customers a free 2-year warranty on nearly everything we sell, free shipping, 24/7 technical support and the dedicated support of our Sales Engineers. Visit us at Sweetwater.com, or give us a call at 800.222.4700 to see how we can help you achieve your creative goals.
Sweetwater Resources
Sweetwater MIDI Interface Buying Guide
How to Choose a MIDI Interface
When MIDI (Musical Instrument Digital Interface) was developed over 30 years ago, it resulted in a flood of music technology. Software DAWs have long replaced the hardware sequencers of the twentieth century, bringing an ever-increasing demand for effective ways to get MIDI in and out of computers.
MIDI keyboard controllers have become an important part of the music-making process for contemporary musicians and producers due to the increasing use of virtual instruments onstage and in the studio.
Knowing the ranges that instruments and voices occupy in the frequency spectrum is essential for any mixing engineer. Sweetwater has put together a Music Instrument Frequency Cheatsheet, listing common sources and their “magic frequencies” — boost/cut points that will produce pleasing results. Just remember to trust your own ears!
You can download the PDF of this chart by clicking here and then print it out.
Since its launch in 1997, Sweetwater’s Word for the Day feature has presented nearly 4,900 music and audio technology terms. Our definitions can help you cut through industry jargon, so you can understand what’s going on.
Moog Music is the leading producer of analog synthesizers in the world. The employee-owned company and its customers carry on the legacy of its founder, electronic musical instrument pioneer Dr. Bob Moog. All of Moog's instruments are hand built in its factory on the edge of downtown Asheville, NC.
Moog Subsequent 25
Subsequent 25
Subsequent 25 is a 2-note paraphonic analog synthesizer that melds the hands-on analog soul of classic Moog instruments with the convenience and workflow of a modern sound-design machine. Moog’s most compact keyboard synthesizer, the Subsequent 25 delivers all of the rich sonic density that Moog synthesizers are known for.
Moog One® is the ultimate Moog synthesizer – a tri-timbral, polyphonic, analog dream-synth designed to inspire imagination, stimulate creativity, and unlock portals to vast new realms of sonic potential.
The Moog Factory in Asheville, NC has resumed production of the highly sought-after Moog 16 Channel Vocoder, an instrument which continuously analyzes the timbral characteristics of one sound (Program) and impresses these timbral characteristics upon a second signal (Carrier). Originally introduced in 1978, and famously heard on Giorgio Moroder’s E=MC2, this model has been used to transmute vocals, transform synthesizers, and electronically encode sound for over 40 years.
Melodics is modern learning for modern instruments
Melodics is modern learning for modern instruments, supporting MIDI Keyboards, Pad Controllers, and electronic drum kits. It’s structured learning for solid progress. Melodics takes the “but where do I start?” out of learning music. Start with a genre you love, or a technique you want to master. Whatever your skill level, there’s something there. Then take a course – Melodics courses take you on a journey, teaching you everything you want to know about a genre or concept.
by Melodics
Founder and CEO Sam Gribben
Melodics was founded by Sam Gribben, the former CEO of Serato and one of the people responsible for the digital DJ revolution and controllerism. So it’s not surprising that Melodics started with finger drumming on pad controllers.
Melodics hardware partners
It's also not surprising that Sam took a page out of the Serato playbook and worked with well-established hardware companies to create value-add bundles with Melodics™. Here is a list of some of the companies that Melodics™ works with.
Because of the relationships he built up over ten years at Serato, Melodics has a stellar collection of artists who contribute lessons and content for the Melodics™ platform. This is just a small sample of the Melodics™ artist roster.
Melodics™ started with training for Pad Controllers like Ableton Push and Native Instruments Maschine. They have guides on techniques and correct posture. Long story short, they treat these new controllers as legitimate musical instruments that you need to practice and learn to play exactly the same way you would with a traditional instrument like a cello or a clarinet.
Melodics for Electronic Drums
Melodics™ is a perfect practice partner for someone with electronic drums.
Melodics™ for Keyboards
Melodics™ has a unique interface for keyboards that shows you what notes are coming next.
Melodics and MIDI
Melodics™ uses MIDI for all of its core functionality. SysEx is used to identify which device is connected and to automatically configure the hardware controls. The lessons are MIDI based, so Melodics™ can look at your performance and compare it to the notes in the MIDI file. Melodics™ can therefore determine whether you played the right note, whether you played early or late, and provide an ongoing report on your musical progress.
MIDI underpins everything we do, from the lesson creation process, to how we play back the lessons and display feedback, to how we interact with the instruments. Under the hood, Melodics is a midi sampler. We take the input from what the student is playing, compare that to the midi in the lesson we created, and show the student how they are doing compared to a perfect performance.
by Melodics
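The note-by-note comparison described above can be sketched in a few lines of Python. This is not Melodics' actual engine, just an illustration of scoring played notes against a lesson's MIDI notes with a timing tolerance:

# Illustration only: score a performance against lesson notes.
# Each note is (MIDI note number, time in seconds).
LESSON = [(60, 0.0), (62, 0.5), (64, 1.0)]
WINDOW = 0.1   # timing tolerance in seconds

def score(played):
    report = []
    for (note, target), (p_note, p_time) in zip(LESSON, played):
        if p_note != note:
            report.append(f"expected note {note}, got {p_note}")
        elif abs(p_time - target) <= WINDOW:
            report.append(f"note {note}: on time")
        else:
            report.append(f"note {note}: {'early' if p_time < target else 'late'}")
    return report

print(score([(60, 0.02), (62, 0.7), (63, 1.0)]))
# ['note 60: on time', 'note 62: late', 'expected note 64, got 63']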
Get started for free!
You can download and start learning with Melodics at no charge.
The KMI K-Board Pro 4 started off as a Kickstarter campaign in 2016 and quickly reached its funding goal of $50,000. What sets the Pro 4 apart from other controllers is KMI's patented Smart Sensor Fabric technology, a unique and proprietary conductive material that changes resistance as it is compressed.
KMI’s patented Smart Sensor Fabric technology
Expressive
KBP4 has Smart Fabric Sensors under each key, bringing five dimensions of expressivity to your playing.
Playable
KBP4 is configured like a traditional keyboard, giving you a familiar playing surface so you can start expressing yourself immediately.
Programmable
The KBP4 Editor Software works on Mac, Windows, or in a web browser to fully customize every element of the KBP4 playing experience.
Every K-Board Pro 4 ships with a free license for Bitwig Studio 8-Track
Bitwig Studio 8-Track, the trim and effective digital audio workstation to start producing, performing, and designing sounds like a pro. 8-Track includes a large selection of Bitwig devices for use on up to eight project tracks with audio or MIDI. Plug in your controller, record your instrument, produce simple arrangements, design new sounds, or just jam.
Bitwig Studio 8-Track is the sketch pad for your musical ideas featuring the acclaimed workflow of Bitwig Studio.
Bitwig Studio 8-Track is available exclusively through bundles with selected partners.
K-Board Pro 4, Bitwig, and MPE
KMI put together a tutorial showing how to set up the K-Board with Bitwig to take advantage of MPE's advanced MIDI expression capabilities.
Ólafur Arnalds didn’t start out playing keyboards. He started out as a drummer in hard rock bands. He is not alone. Yoshiki from the legendary Japanese hard rock band X Japan comes to mind. Many people forget that the piano is classified as a percussion instrument along with marimbas and vibraphones.
He has a unique approach to music that combines technology with a traditional, almost classical approach to composition. He is also one of the few people still using the Moog Piano Bar, a product developed by Bob Moog and Don Buchla (now discontinued) to turn any piano into a MIDI device.
Photo: Richard Ecclestone
What’s behind bleep bloop pianos
In many interviews, Ólafur says that his acoustic pianos bleep and bloop.
In these two YouTube videos, he explains how MIDI technology is a core part of his creative process. What is interesting is how organic and emotional the resulting music is. The technology never gets in the way of the art and only complements it.
This video explains how the three acoustic pianos are connected by MIDI.
I am in constant search of new ways to approach art with technology, interaction and creativity.
by Halldór Eldjárn
Halldór Eldjárn is another Icelandic artist who worked on the All Strings Attached project and developed some robotic MIDI instruments for the project.
Ólafur Arnalds on NPR’s Tiny Desk Concerts
To see a complete performance of this unique use of MIDI processing, listen to this performance on NPR Music Tiny Desk Concerts.
How to Bleep (and Bloop) yourself
Arnalds has released a library of sounds for Spitfire Audio recorded at his studio on his ‘felted’ grand piano along with added content in the Composers Toolkit.
Recently, MIDI Manufacturers Association member Blokas released the Midihub, a MIDI router and processor. In our article on the Midihub, Loopop explains how to use the Midihub to create some Ólafur Arnalds-inspired MIDI effects of your own.
The SHARC® Audio Module is an expandable hardware/software platform enabling project prototyping, development, and deployment of audio applications including effects processors, multi-channel audio systems, MIDI synthesizers/controllers, and many other DSP/MIDI-based audio projects.
The centerpiece of the SHARC Audio Module is Analog Devices’ high-performance SHARC ADSP-SC589. Combining two 450 MHz floating point DSP cores, a 450MHz ARM® Cortex®-A5 core and an FFT/IFFT accelerator with a massive amount of on-board I/O, the ADSP-SC589 is a remarkable engine for audio processing.
This development platform is designed for the experienced programmer and is supported with an extensive wiki that includes a bare metal, light-weight C / C++ framework designed for efficient audio signal processing with lots of example code and numerous tutorials and videos. These tutorials include audio processing basics, effects creation and a simple MIDI synthesizer.
In addition, the SHARC Audio Module supports the MicroPython programming language and Faust, a functional programming language, specifically designed for real-time audio signal processing and synthesis.
The SHARC Audio Module from Analog Devices comes complete with a license-free Eclipse development environment (CrossCore Embedded Studio, CCES) and a free in-circuit emulator. Also available is the Audio Project Fin, a must-have add-on board for serious MIDI developers with 5-pin MIDI DIN, ¼" balanced audio, control pots, switches, and a prototyping area. The best news is that both boards can be had for less than $300 total!
British mega-band MUSE is currently on tour promoting their latest album Simulation Theory, performing in sold-out stadiums all over the world. Each night, frontman and guitarist Matt Bellamy brings out a one-of-a-kind guitar with a special history to play the song "The Dark Side." While Bellamy is happy with the result, reporting that "the guitar works great!", the story of how this guitar was conceived and built in just a few short weeks is very interesting.
Matt Bellamy, being the perfectionist that he is, wants the sounds he created in the studio on stage as much as possible. One essential part of his sound is the Arturia Prophet V synthesizer. Being a user of Fishman’s TriplePlay MIDI guitar pickup & controller, both on stage and in the studio, he wanted to continue to use that to play the Arturia synth live, but without distance, range, cables and a computer getting in the way of his stage performance.
When Matt told me he absolutely wanted to use the Prophet V softsynth live on tour but still be able to move around the stage without any restrictions, I knew we had to find a new kind of solution that would take the computer out of the picture.
by Muse guitar tech Chris Whitemyer
Chris Whitemyer was aware of Swedish music tech company MIND Music Labs and how their ELK MusicOS could run existing plugins and instruments on hardware. Thinking MIND might be the missing piece of the puzzle he approached them at the 2019 NAMM Show. Together with Fishman and Arturia, a first meeting was held in the MIND Music Labs booth on the show floor. That meeting, which took place just a few weeks before the start of Muse’s 2019 World Tour, kicked off several hectic weeks resulting in the three companies producing a new kind of guitar just in time for the tour’s first date in Houston, TX.
Going to that first meeting at NAMM I didn't know what to expect, but as soon as we plugged the guitar with our TriplePlay system into the Powered by ELK audio interface board, it was pretty clear that the Fishman and ELK systems would be compatible.
What was clear after the first meeting was that the reliability of the Fishman TriplePlay MIDI Guitar Controller in combination with ELKs ability to run existing plugins inside the guitar could open up a new world for performers like Matt Bellamy. And with the tour just weeks away, a plan was hatched to get the system finalized and ready for use in the most demanding of conditions – a world tour of arenas and stadiums.
Only days after the closing of the NAMM Show, MIND Music Labs CTO Stefano Zambon flew to Fishman's Andover, MA headquarters to figure out how to get a Powered by ELK audio board inside a guitar that not only plays well enough to satisfy a world-class performer, but could also control the Arturia Prophet V at extremely low latency. In short: redefine the state of the art for synth guitars.
Getting three different companies to join forces on a special project like this does not happen very often, so this was truly special. To go from a first meeting at NAMM to a functioning system in just weeks was a mind-blowing achievement. It required the special expertise and focused efforts of all three companies to pull it off – I can still hardly believe we did.
To see one of our V Collection classic products like the Prophet V on Stage with Muse is very exciting. The fact that it is that same plugin running in the guitar as you use in the studio really makes all the difference. I mean, Matt Bellamy even uses the same preset in the studio!”
by Arturia CEO Frédéric Brun
On February 22nd, just four weeks after the first meeting at NAMM, MUSE went on stage in Houston in front of a jam-packed Toyota Center. Seven songs into the show, Chris Whitemyer handed Matt Bellamy the new guitar for the song "The Dark Side."
When all the guys got together to build this, we didn’t tell Matt that a new guitar was going to be built or maybe not built. I just gave it to him for the first show and told him he could walk as far as he wanted on stage. He just said ‘Oh, Cool!'”
I had no doubt in my mind it would work and it performed flawlessly. When I first got the guitar one week before the first show I tested it very thoroughly, leaving it on for four hours, turning it off and on fifty or more times, and jumping up and down with it and bouncing it off a mattress. It passed all the tests. The guitar is rock solid! Matt and I couldn’t be happier. It does everything I hoped it would and it’s on stage every night.
by Muse guitar tech Chris Whitemyer
If you want to see this unique guitar in action it will be on MUSE’s Simulation Theory World Tour in the U.S. through May, then in Europe all summer and in South and Central America this fall.
You may not know it, but a lot of the software you use may be built with the same framework: JUCE. JUCE is used for the development of desktop and mobile applications.
The aim of JUCE is to allow software to be written such that the same code will run identically on Windows, Mac OS X, and Linux platforms. It supports various development environments and compilers.
JUCE not only teaches you how to build audio apps and synths, but also how to control them with MIDI.
Dave Zicarelli from Cycling '74 and Brett Porter from Art and Logic use JUCE
Why does that matter? Both David and Brett are in the MIDI 2.0 prototyping working group, and a lot of the MIDI 2.0 prototyping work they are doing is being done in JUCE, so it will run across the various development environments and compilers that JUCE supports. Tools like JUCE weren't available back in 1982!
Melodics™ is a desktop app that teaches you to play MIDI keyboards, pad controllers, and drums.
Melodics works with any MIDI capable keyboard, pad controller, or drum kit. It has plug & play support for the most popular devices on the planet and custom remapping for everything else.
It’s free to download, and comes with 60 free lessons to get you started.
With acoustic instruments, playing in time comes naturally. You can jump in when the time’s right, and everyone keeps their flow. Playing together with electronic instruments hasn’t always been so easy. Now Link makes it effortless.
Link is a technology that keeps devices in time over a local network, so you can forget the hassle of setting up and focus on playing music. Link is now part of Live, and also comes as a built-in feature of other software and hardware for music making.
Join the session
Hop on to the same network and jam with others using multiple devices running Link-enabled software. While others play, anyone can start and stop their part, or start and stop multiple Link-enabled applications at the same time. And anyone can adjust the tempo and the rest will follow. No MIDI cables, no installation, just free-flowing sync that works.
With Live and beyond
People make music using a range of instruments, so Link helps you play together using a range of devices. A growing number of music applications have Link built in, which means anyone on the same network can play them in time with Live. You can even use Link without Live in your setup: play Link-enabled software in time using multiple devices, or multiple applications on the same device.
Push is an instrument that puts everything you need to make music in one place—at your fingertips
Making music is hard. To stay in the flow, you need to be able to capture your ideas quickly, and you need technology to stay out of the way. Computers make it possible for one person to create whole worlds of sound. But instruments are where inspiration comes from. Push gives you the best of everything. It’s a powerful, expressive instrument that gives you hands-on control of an unlimited palette of sounds, without needing to look at a computer.
Spend less time with the computer when composing ideas, editing MIDI or shaping and mixing sounds. Browse, preview and load samples, then slice and play them on 64 responsive pads. Play and program beats, melodies and harmonies. See everything you do directly on Push’s multicolor display. Integration with Live is as tight as possible, which means what you do on Push is like putting your hands directly on the software.
Ableton Push 2 Key Features:
Hardware instrument for hands-on playability with Ableton Live
Simultaneously sequence notes and play them in from the same pad layout
Creative sampling workflows: slice, play and manipulate samples from Push
Navigate and refine your music in context directly with advanced visualization on the Push multicolor display
64 velocity- and pressure-sensitive backlit pads
8 touch-sensitive encoders for controlling mixer, devices and instruments, and Live browser navigation
Launch clips from the pads for jamming, live performance or arrangement recording
Scales mode offers a unique approach to playing notes and chords
Includes Beat Tools—a toolkit for beatmakers with more than 150 drum kits and instruments, 180 audio loops and much more
Includes Live 10 Intro for new users
Push gives you the best of both worlds for making music: inspiring hardware for hands-on control at the beginning, and full-featured music creation software for fine-tuning the details at the end.
Push is the music making instrument that perfectly integrates with Ableton Live. Make a song from scratch with hands on control of melody, beats and structure.
NKS is an integration technology developed by Native Instruments
NKS brings all your software instruments, effects, loops and samples into one intuitive workflow, creating seamless integration between NI and other leading developers. It gives you streamlined browsing, consistent tagging, instant sound previews, pre-mapped parameters, Smart Play features, and more. NKS also connects all your favorite tools to our KOMPLETE KONTROL keyboards and software, MASCHINE, and third-party controllers. So if you see the NKS logo, you know what to expect: an intuitive and comfortable workflow that makes it easy to bring your sound to life.
by Native Instruments
BROWSE BETTER AND FASTER THAN EVER
Hear instant audio previews as you scroll through thousands of patches, from hundreds of instruments, from over 75 developers.
EVERYTHING IS PRE-MAPPED
Start playing and tweaking instantly – just load an instrument or an effect and go. Each parameter is pre-mapped to the hardware, with the mappings designed by the developers themselves.
PLAY COMPLEX MUSIC EASILY
The KOMPLETE KONTROL software lets you play intricate chord progressions and arpeggios, even without musical training, with single finger control. NKS helps bring out the music in you.
DEEPER CONTROL
The Light Guide on the KOMPLETE KONTROL S-Series keyboards lets you see – and control – a range of deeper settings including articulations, keyswitches, and more.
The Sonogenic is not just a MIDI controller; it has built-in sounds, speakers, and USB Audio/MIDI connectivity
The SHS-500 has everything you need to start playing right away, all built into the compact "keytar" form factor.
Sonogenic Red and Black
Sonogenic Controls
The Sonogenic has both Audio (stereo 44.1kHz) and MIDI USB capabilities and lots of connectivity
The SHS-500 Sonogenic connectivity
The SHS-500 features Bluetooth MIDI for wireless iOS connectivity
The Chord Tracker App
Chord Tracker is an app that analyzes the songs in your music library and nearly instantaneously shows you the musical structure of each in the form of an easy-to-understand chord chart like this:
Chord Tracker
Sonogenic SHS500 Features:
37-note keytar with Bluetooth MIDI for wireless iOS connectivity
JAM mode lets you focus on playing rhythms while the Sonogenic takes care of playing the correct notes of songs
37 mini keys that play like a full-sized keyboard
Modulation wheel lets you control the amount of modulation effect on your sound
The USB-to-Host port connects to a wide variety of educational, creative, and entertaining musical applications on your computer or mobile device
3.5mm AUX input for connecting a portable music player, iOS device, mixer, or computer for audio playback via internal speakers
¼” AUX Line output jacks for connecting to an external amp or PA system without disabling the onboard speakers
Included AC adapter, MIDI breakout cable, neck strap
Though Roland co-invented the Musical Instrument Digital Interface (MIDI) well over three decades ago, it’s still an integral part of new products and is as useful to musicians as ever. A prime example is the tiny but mighty Roland VT-4 Voice Transformer, a portable effects box for the instrument inside us all—the human voice.
Today’s musical styles increasingly use unusual vocal sounds with heavy processing, making them stand out and grab the listener’s attention. With the Roland VT-4, you have a wealth of modern and retro vocal effects at your fingertips, with no need for a complicated setup using a computer and plug-ins. The VT-4 has everything from delay and reverb to mind-bending formant and vocoding effects. Better still, the Roland VT-4’s performance-oriented interface lets you ride the controls while you sing to constantly alter the sound to suit the track and enhance the vibe of your performance.
But what if you need more control over your pitch or the voicings of your vocal harmonies? That’s where MIDI comes in.
While the Roland VT-4 works great on its own and can harmonize and vocode without any input other than your voice, plugging in a MIDI keyboard opens up even more expressive possibilities. Through MIDI you can control the Auto-Pitch, harmony, and vocoder engines in real time with the notes you play from a connected controller. You can hard-tune your voice to specific notes as you sing, or create instant MIDI-controlled melodies and multi-part harmonies with voicings that follow your chords, and it is SO simple to get set up!
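To get a feel for what the VT-4 is receiving, here is a hedged mido sketch that holds a three-note chord for the harmony or vocoder engine to follow (the port name is hypothetical; check the VT-4 manual for its actual MIDI mapping):

import time
import mido

out = mido.open_output('VT-4 Port')       # hypothetical port name

chord = [60, 64, 67]                      # C major triad
for note in chord:
    out.send(mido.Message('note_on', note=note, velocity=100))

time.sleep(2.0)                           # sing into the mic while the chord is held

for note in chord:
    out.send(mido.Message('note_off', note=note))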
Supported by MIDI, the Roland VT-4 Voice Transformer brings real time vocal processing (including vocoding!) into the 21st Century.
One of the biggest recent developments in MIDI is MIDI Polyphonic Expression (MPE). MPE is a method of using MIDI which enables multidimensional controllers to control multiple parameters of every note within MPE-compatible software.
It has never been as easy to stay “in the box” as it is now. There are lots of software virtual instruments out there; some emulate hardware instruments, and others offer completely new sounds. That said, there’s something special about performing on a synthesizer or MIDI instrument with its own sound engine that’s difficult, if not impossible, to capture in software. And just as software instruments keep getting better, hardware MIDI instruments have never been better or more affordable. Here are ways you can record your MIDI instrument, depending on the features.
Recording a MIDI Instrument with USB Audio and MIDI
If your MIDI instrument has a USB port that can both send and receive MIDI and audio data, you’re in luck! Recording this device will be a breeze. First, connect the USB port on your instrument to a USB port on your computer. Then make sure that your DAW sees the USB ports of your instrument as both audio and MIDI devices. You’ll want to set up an instrument track to record and play back the MIDI data from your instrument, and to accept the audio input coming from the USB audio connection as well. This allows for the most flexible use of your MIDI instrument possible — you can record it, edit the recorded MIDI notes, and then hear the resulting edited audio coming back from your instrument.
Recording a MIDI Instrument with USB MIDI Only
Many MIDI instruments that have USB ports will only send and receive MIDI data over USB. This isn’t quite as convenient as if your instrument could send both audio and MIDI over USB, but it’s still easy to work with. First, connect the USB port of your instrument to a USB port on your computer, and connect the audio outputs of your instrument to audio inputs on your audio interface. Next, set up a MIDI track in your DAW to record and play back the MIDI data from the USB connection of your instrument. Then set up an audio track in your DAW to record the audio inputs on your interface that you’ve connected your instrument to. Now your MIDI track will record and then play back MIDI to your instrument over USB, and your audio track will record the audio output from your instrument. Although connecting everything is a bit more complicated with this method, you’ll still be able to record, edit the recorded MIDI notes, and then hear the resulting edited audio coming back from your instrument.
Recording a MIDI Instrument with No USB Ports
Some MIDI instruments, especially older ones, don’t have any USB ports at all. They will usually use the original 5-pin DIN MIDI ports. This requires a little extra gear but is fundamentally the same as recording a MIDI instrument with USB MIDI only. The big difference is that you’ll need a separate USB MIDI interface to send and receive MIDI between your instrument and computer. Some audio interfaces may come with a built-in 5-pin DIN MIDI interface; otherwise, you can purchase a dedicated one. You can buy inexpensive MIDI interfaces with a single MIDI in and MIDI out port, such as the M-Audio MIDISport 2 x 2, or fully featured rackmounted MIDI interfaces, such as the MOTU MIDI Express series with up to 8 x 8 MIDI ports, depending on how many MIDI devices without USB you have. Once you have your MIDI devices connected to your computer via a USB MIDI interface, the rest of the process is identical to the prior method: recording a MIDI instrument with USB MIDI only.
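Whichever method applies to your instrument, it is worth confirming that your computer actually sees the MIDI ports before digging into DAW settings. A quick check with the Python library mido (one option among many):

import mido

# A connected, powered-on instrument (or MIDI interface) should appear here.
print("Inputs: ", mido.get_input_names())
print("Outputs:", mido.get_output_names())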
It might take a little more planning to record a hardware MIDI instrument, but the expression potential and the often unbeatable sound quality make it worth it. Don’t let the fact that the sounds aren’t inside your computer scare you off; recording MIDI instruments is easy!
BLOCKS is a modular music making system made up of 5 components
Seaboard Block Super Powered Keyboard
Multi-award-winning Seaboard interface
5D Touch technology
24 keywave, two-octave playing surface
Hundreds of free sounds
Suite of music making software for desktop and mobile
Wireless and portable for making music on the go
Connects to other Blocks
Lightpad Block Expressive Musical Touchpad
Touch responsive soft silicon playing surface
LED illumination reconfigures Lightpad M for different notes and scales
Adaptable surface can become a drum pad, fader bank, effects launcher and more
Hundreds of free sounds
Suite of music making software for desktop and mobile
Wireless and portable for making music on the go
Connects to other Blocks
Perform with the Live Block
The Live Block is for performance. The buttons let you switch scales and octaves, trigger chords and arpeggios, and sustain notes in real time.
Touch Block-Add Expression Faster
Touch Block helps you adjust the expressive behavior of your Seaboard Block and Lightpad Block. Turn up or turn down the responsiveness of the surface to the Strike, Glide, Slide, Press, and Lift dimensions of touch. Maximize the depth of expression available through pressure, or minimize the pitch-bend effect of sideways movements. Customize your control of any sound in real time and on the fly.
Loop Block-Produce Faster
Loop Block helps you produce a track faster. Record loops and play them back. Set your tempo, and quantize your loops so they’re always in time.
ROLI Dashboard
Customize BLOCKS and the Seaboard RISE for your workflow
Blocks become open-ended MIDI control surfaces through ROLI Dashboard. Customize the LED-illuminated Lightpad Block by loading different apps, including a note grid, a bank of faders and more. Use Control Blocks as CC controllers for your favorite DAW.
MIDI Controller/Audio Interface for mobile musician
The iRig Keys I/O comes in two versions: a 25-key MIDI controller and a 49-key MIDI controller. Both feature a built-in audio interface with 24-bit/96kHz sound quality, a Neutrik combo input with phantom power, and eight touch-sensitive RGB LED backlit drum pads.
iRig Keys I/O 25
iRig Keys I/O 49
Complete suite of music production software included
The iRig Keys I/O 25 comes with all the software you need to start creating music. Ableton Live Lite is the perfect DAW to get started with, and IK Multimedia adds T-RackS Deluxe with 10 mixing and mastering tools, plus SampleTank 3 with 4,000 instruments, 2,500 rhythm loops, and 2,000 MIDI files. If you are a mobile musician, SampleTank iOS for iPad and iPhone is a full-featured mobile sound and groove production studio.
Ableton Live lite
Sample Tank 3
T-RackS Deluxe
IK Multimedia iRig Keys I/O 49 Features:
MIDI controller with 49 full-size, velocity-sensitive keys
8 touch-sensitive RGB LED backlit drum pads for beat creation
Touch-sensitive sliders and buttons plus touch-sensitive rotary controllers for controlling soft synths and other apps
Built-in USB audio interface features excellent 24-bit/96kHz sound quality
Neutrik combo input with phantom power handles nearly any microphone or instrument
Stereo line output and headphone jack provide ample monitoring options
This panel discussion will also include live and video performances from the participants.
Panelists: Jordan Rudess, Pat Scandalis, Alon Ilsar, Keith Groover, Qianqian Jin, Nathan Asman
The Glide, GeoShred and AirSticks win Guthman New Instrument Competition
On March 9th at the Georgia Tech Center for Music Technology, three judges, with audience input, selected the three winners of the 2019 Guthman New Instrument Competition.
All three judges are people who are heavily involved with MIDI.
Pamela Z: Composer, Performer, Media Artist
Roger Linn: Technical Grammy Award Winner
Ge Wang: Associate Professor, Stanford University
The Glide was conceived, designed, and coded by Keith Groover, a musician, music educator, and inventor living in South Carolina. There are two controllers, one for each hand, and each controller has three accelerometers (for the X, Y, and Z axes). It is primarily designed to be a MIDI controller broadcasting over Bluetooth, which means that you pair it with a phone, tablet, or computer and then play through a synthesizer app. Here is a video on how it works.
Jordan Rudess is no stranger to MIDI.org. We have done exclusive interviews with him, and his videos playing a number of MPE instruments are featured in our articles on MPE. Now his GeoShred app has won 2nd place in the 2019 Guthman New Instrument Competition. GeoShred is highly expressive when controlling, and being controlled by, instruments that use the MPE MIDI specification (MIDI Polyphonic Expression). It’s both a powerful synth and a formidable iPad-based MIDI/MPE controller!
The AirSticks combine the physicality of drumming with the unlimited possibilities of computer music, taking the practice of real-time electronic music to a new realm.
The AirSticks were developed by drummer/electronic producer Alon Ilsar and computer programmer/composer Mark Havryliv. AirSticks transform off-the-shelf gaming controllers into a unique musical instrument.
The Qijin was developed by Qianqian Jin, a student in the Technology and Applied Composition (TAC) program at the San Francisco Conservatory of Music. The Qijin is a customized MIDI controller for a guzheng (a Chinese classical zither). It is not only a MIDI controller, but it also has a built-in amplification system to augment its capacity for live performance and sound design. A built-in Arduino board that supports MIDI allows the performer to connect to any MIDI-compatible music software.
The Kaurios gets its name from the amazingly unique wood it is made out of. Kauri is the oldest wood available in the world, having been buried underground in New Zealand for about 50,000 years. So Nathan Asman’s project marries ancient wood with state-of-the-art wireless BTLE MIDI technology.
This custom-built instrument is called Curve, and is named after the shape and contour of the interface itself. I wanted to create something that had a myriad of different sensors and ways of controlling different musical parameters.
The tagline for the Margaret Guthman New Instrument Competition is “the future of music” and all three winners of the 2019 competition were MIDI controllers. So the future of music is MIDI. We couldn’t agree more.
Controllerism: May 4, 2019 at 3 PM Pacific Time. A panel discussion with the people who created the Controllerism movement about how MIDI influences the world of digital DJs.
Laura Escudé, Sam Gribbens, Huston Singletary, Moldover, Kate Stone, Shawn Wasabi
Panelists
Laura Escudé
International music producer, DJ, controllerist, violinist and live show designer Laura Escudé aka Alluxe has been an important figure in some of the most revered concerts around the globe, DJing, programming and designing shows for the likes of Kanye West, Jay Z, Miguel, Charli XCX, Demi Lovato, Iggy Azalea, Yeah Yeah Yeahs, Herbie Hancock, Cat Power, Bon Iver, Drake, The Weeknd, Silversun Pickups, Garbage, Childish Gambino and M83. Escudé is a classically trained violinist, an Ableton Certified Trainer and is the CEO of Electronic Creatives, a team of some of the most talented and sought after programmers and controllerists in the business.
Sam Gribbens
Sam was the CEO of Serato when the Controllerism movement began. He then went on to found Melodics™. Having finished up at Serato after a decade at the helm, Sam was ready for something new. He’d worked with some of the biggest artists in the music world, and with the international companies who built the instruments & controllers they used. Along the way he noticed how important pad & cue point drumming was becoming in the overlapping worlds of DJing & production. Thus, an idea was born.
Huston Singletary
Sound designer, producer, film composer, product specialist, clinician, and programmer Huston Singletary has been affiliated with the best of the best in the sound design/synth world, including Toontrack, iZotope, Synthogy, Native Instruments, Roland, Alesis, and Spectrasonics.
Moldover
History only notes a handful of artists who successfully pushed the limits – both with their music and the design of their musical instruments. What Bach was to the keyboard and Hendrix was to the guitar, Moldover is to the controller. Disillusioned with “press play DJs”, Moldover fans eagerly welcome electronic music’s return to virtuosity, improvisation, and emotional authenticity. Dig deeper into Moldover’s world and you’ll uncover a subversive cultural icon who is jolting new life into physical media with “Playable Packaging”, sparking beautiful collaborations with his custom “Jamboxes”, and drawing wave after wave of followers with an open-source approach to sharing his methods and madness.
Kate Stone
Dr. Kate Stone, founder of Novalia, works at the intersection of ordinary printing and electronics to make our current analogue world come alive through interaction. Novalia creates paper thin self-adhesive touch sensors from printed conductive ink and attached silicon microcontroller modules. Their control modules use Bluetooth MIDI connectivity. “Novalia’s technology adds touch, connectivity and data to surfaces around us. We play in the space between the physical and digital using beautiful, tactile printed touch sensors to connect people, places and objects. Touching our print either triggers sounds from its surface or sends information to the internet. From postcard to bus shelter size, our interactive print is often as thin as a piece of paper. Let’s blend science with design to create experiences indistinguishable from magic.”
Shawn Wasabi
Shawn Wasabi is an artist/producer/visionary of Filipino descent from the city of Salinas, California. He first awed the Internet with his release of “Marble Soda”, using the rare Midi Fighter 64, which he co-designed. Using this one-of-a-kind machine, Shawn reached 1 million views on YouTube within 48 hours of “Marble Soda” being uploaded.
On the heels of the success of “Marble Soda”, he went on to release 7 more original songs, amassing over 100 million YouTube views in the span of 3 years. Shawn also created an original visual element that blends video games, animation, and music together. With his visual brand, Shawn Wasabi has cultivated a demand for his services as a studio music producer, which resulted in famed songwriter Justin Tranter signing him to an exclusive publishing deal with Facet Music/Warner Chappell.
With K-Board Pro 4 we’ve taken the format of a traditional keyboard and updated it for the 21st Century. With our SmartFabric™ Sensors underneath each key you can tweak any synthesis parameter in real time by moving your fingers while you are playing. The MIDI MPE Standard is the future for expressive controllers and we have designed the K-Board Pro 4 to be the ultimate MPE Controller.
by Keith McMillen
Multidimensional Expression
The Keith McMillen Instruments K-Board Pro 4 is a 4-octave MIDI keyboard controller with multidimensional touch sensitivity in each key. K-Board Pro 4 supports MIDI Polyphonic Expression (MPE), which allows additional gestures individually on each key. You can wiggle your finger horizontally to generate MIDI CC commands, slide vertically to open up a filter, or apply pressure to control volume. For non-MPE synths, the K-Board Pro 4 provides fully featured polyphonic aftertouch. The data from each gesture is completely assignable and sent individually per note.
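To make “per note” concrete, here is a minimal Python sketch (using the mido library) of the MPE idea: each sounding note is assigned its own MIDI channel, so its pitch bend, slide, and pressure messages never collide with those of other held notes. The port name and helper functions are hypothetical illustrations, not an official KMI API.

```python
import mido

# Hypothetical port name; list the real ones with mido.get_output_names().
out = mido.open_output('K-Board Pro 4')

def play_mpe_note(channel, note, velocity=100):
    # Each note gets its own member channel (zero-based 1-15 here;
    # channel 0 is the MPE master). Center the per-note pitch wheel first.
    out.send(mido.Message('pitchwheel', channel=channel, pitch=0))
    out.send(mido.Message('note_on', channel=channel, note=note, velocity=velocity))

def glide(channel, semitones, bend_range=48):
    # Horizontal finger movement becomes per-note pitch bend
    # (MPE's default member-channel bend range is +/-48 semitones).
    out.send(mido.Message('pitchwheel', channel=channel,
                          pitch=int(8191 * semitones / bend_range)))

def slide(channel, value):
    # Vertical movement is conventionally sent as CC74 on the note's channel.
    out.send(mido.Message('control_change', channel=channel, control=74, value=value))

def press(channel, value):
    # Continuous pressure is channel aftertouch on the note's own channel.
    out.send(mido.Message('aftertouch', channel=channel, value=value))

play_mpe_note(1, 60)   # middle C on member channel 1
glide(1, 0.5)          # bend just that note up a quarter tone
```

Because every gesture rides on the note’s own channel, a receiving synth can bend one note of a chord while the others stay put, which is exactly what single-channel MIDI cannot do.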
Keith McMillen Instruments K-Board Pro 4 Features:
Provides a level of expressiveness previously attainable only with acoustic instruments
Support for MPE (MIDI polyphonic expression) protocol
SmartFabric sensors underneath each key
Transmits attack and release velocity and continuous pressure, as well as horizontal and vertical position data
48 resilient silicone keys and no moving parts for superior durability
USB powered; class compliant
MacOS/Windows, iOS/Android compatibility
SmartFabric sensor technology
Under each key is Keith McMillen Instruments’ patented SmartFabric sensor technology, which lets you tweak any synthesis parameter in real time simply by moving your fingers while you are playing.
The K-Board Pro 4 is USB powered and class compliant to ensure compatibility with MacOS, Windows, iOS, and Android, as well as all MIDI-enabled hardware.
Editors in OSX, Windows and Web MIDI formats
Keith McMillen Instruments provides editors for OSX and Windows, but you can also edit and update your K-Board Pro 4 directly online using Web MIDI.
After many years, Moog releases a polyphonic analog synth
The Moog One is a programmable, tri-timbral analog synth featuring an intuitive tactile interface that allows you to explore a vast sonic universe of classic Moog analog circuits that have been known for many years for their unrivaled punch and rich harmonics.
An advanced sound architecture that comes in 16-voice and 8-voice versions
The 16-voice version plays sixteen complete voices simultaneously, and the 8-voice plays eight. Each voice features three state-of-the-art analog voltage-controlled oscillators (VCOs), two independent analog filters (a Variable State filter and the famous Moog Ladder Filter) that can be run in series or parallel, a dual-source variable analog noise generator, an analog mixer with external audio input, four LFOs, and three envelope generators.
You can split or layer three different timbres — each with its own sequencer, arpeggiator, and onboard effects library — across the premium 61-note Fatar keyboard with velocity and aftertouch.
Moog One Analog Synthesizer Features:
8- or 16-voice polyphony
3 VCOs per voice with waveshape mixing and OLED displays
Unison mode (up to 48 oscillators on the 16-voice instrument)
2 filters per voice with filter mixing (2 multimode State Variable filters that function as a single filter, and a classic lowpass/highpass Moog Ladder filter)
3 DAHDSR envelopes per voice with user-definable curves
3-part multitimbrality
Separate sequencer and arpeggiator per timbre
Chord memory
Dual-source noise generator with dedicated envelope
Mixer with external audio input
Ring modulation with selectable routing
Oscillator FM and hard sync with selectable routing
4 assignable LFOs
Premium 61-note Fatar TP-8S keybed with velocity and aftertouch
Assignable pressure-sensitive X/Y pad
Digital Effects (Synth and Master Bus)
Eventide reverbs
Selectable glide types
USB and DIN MIDI
Save, categorize, and recall tens of thousands of presets
Create Performance Sets that make up to 64 presets accessible at the push of a button
2 x ¼” stereo headphone outputs
2 pairs of assignable ¼” outputs (supports TRS and TS)
4 x ¼” hardware inserts (TRS)
1 x ¼” external audio input (line-level)
1 XLR + ¼” TRS combo external audio input with trim knob
9 assignable CV/GATE I/O (5-in/4-out)
USB drive support for system and preset backup
LAN port for future expansion
Amos Gaynes on the Moog One
Amos Gaynes works for Moog Music, and he is also the chairman of the MIDI Manufacturers Association’s Technical Standards Board. Here he talks about the development of the Moog One.
UNO Drum marries analog sounds and digital control
The UNO Drum features six true analog voices — kick, snares, claps, and hi-hats — plus there are 54 PCM samples — toms, rims, ride, and cowbell — derived from IK’s popular SampleTank 4. Because the UNO has 11-voice polyphony you can even layer the analog and PCM sounds together.
The analog section was designed by Soundmachines who also collaborated with IK Multimedia on the UNO Synth.
IK Multimedia UNO Drum Features:
Drum machine with analog engine plus 54 PCM samples
6 analog voices designed by Soundmachines
54 PCM samples derived from SampleTank 4
Layer analog and PCM sounds together with 11-voice polyphony
Loads of sound-shaping tools, including tune, snap, and decay for every sound, and global drive and compression effects
12 touch-sensitive pads with dual velocity zones
4 dynamic encoders
Stutter, random, and roll effects for spicing things up
64-step sequencer with 8 parameter automations per step
Record by step or in real-time
Save and recall 100 patterns and 100 drum kits
Song mode chains up to 64 patterns together in any order
Integrates with your rig via USB, 2.5mm MIDI I/O, and audio pass-through
Runs off battery or USB bus power
Integrates in any Live, Studio, or Mobile Set-up
The UNO Drum features USB and traditional MIDI via 2.5mm jacks (the cables are included), so it’s easy to integrate with your Mac/PC, iOS device, or traditional outboard MIDI gear.
The UNO Drum also offers audio in with compression, so you can daisy-chain it with other gear.
Fig. 1: The orange notes overlap the attacks of subsequent notes. The white notes are trimmed to avoid this.
Most bass lines are single notes, and because bassists lift fingers, mute strings, and pick, there’s going to be a space between notes. Go through your MIDI sequence note by note and make sure that no note extends over another note’s attack (Fig. 1). If two notes play together, you’ll hear a momentary note collision that doesn’t sound like a real bass. I’ll even increase the gap between notes slightly if the notes are far apart.
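Here is a minimal Python sketch of that trimming pass. It assumes notes are simple (start_tick, duration, pitch, velocity) tuples sorted by start time; the representation is hypothetical rather than any particular DAW’s format.

```python
# Trim each note so it ends before the next note's attack, as in Fig. 1.
def trim_overlaps(notes, gap=10):
    """Shorten each note to end at least `gap` ticks before the next attack."""
    trimmed = []
    for i, (start, dur, pitch, vel) in enumerate(notes):
        if i + 1 < len(notes):
            next_start = notes[i + 1][0]
            # Never let this note ring past the following attack.
            dur = min(dur, max(1, next_start - start - gap))
        trimmed.append((start, dur, pitch, vel))
    return trimmed

bass_line = [(0, 500, 45, 96), (480, 500, 48, 90), (960, 500, 50, 100)]
print(trim_overlaps(bass_line))  # first two notes get clipped to 470 ticks
```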
2. Squeeze every drop out of your track
Fig. 2: Studio One’s Transform tool makes it easy to compress values by raising the tool’s lower boundary.
Great bassists are known for their touch — the ability to play notes with consistent timing and dynamics. It can sometimes be harder to play keyboard notes consistently than bass strings, which brings us to MIDI velocity compression.
Audio compression can give more consistent levels, but it doesn’t give a more consistent touch; that has to happen at the source. Some recording software programs have either MIDI FX or editing commands to compress data by raising low-level notes and/or reducing high-level notes (Fig. 2). But if your program doesn’t have velocity compression, there’s an easy solution: add a constant to all velocity values for “MIDI limiting.”
For example, suppose the bass part’s softest note velocity is 70, and the highest is 110 — a difference of 40. Add 35 to all values, and now your softest velocity is 70+35=105, and your highest is 110+35=145, but velocity can’t go higher than 127 — so you have instant “MIDI limiting.” Now your highest-velocity note is 127, and there’s only a difference of 22 between the highest and lowest notes. If you want to go back to making sure the highest-level note is 110, then subtract 17 from all values. Your highest-level note is now at 110, but the lowest-level note is 88 — still a difference of 22 instead of 40.
This doesn’t necessarily preclude adding audio compression, but you’ll probably need to add less of it, and the sound will be more natural.
These kinds of techniques work, perhaps with slight modifications, with many software programs. For example, when editing MIDI dynamics, although Studio One’s Transform tool shown above gives very intuitive visual feedback, Cubase and Digital Performer have very flexible ways to control MIDI dynamics, and Ableton Live’s Velocity MIDI effect even lets you sculpt velocity curves.
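As a sanity check on the arithmetic, here is a minimal Python sketch of the “MIDI limiting” trick described above: add a constant so the loudest notes pin at 127, then shift everything back down so the peak lands where it started.

```python
def midi_limit(velocities, boost=35, restore_peak=True):
    """Add `boost` to every velocity, clip at 127, then optionally shift
    the whole part back down so the loudest note keeps its original value."""
    original_peak = max(velocities)
    limited = [min(127, v + boost) for v in velocities]
    if restore_peak:
        shift = max(limited) - original_peak
        limited = [v - shift for v in limited]
    return limited

print(midi_limit([70, 85, 110]))  # -> [88, 103, 110]
```

Running it on the example velocities 70, 85, and 110 returns 88, 103, and 110: the original 40-point spread compressed to 22, exactly as worked out above.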
3. If it’s a Synth Bass
That means you can probably modulate synth parameters with velocity. When creating sampled bass instruments, rather than go through the hassle of multi-sampling different velocities, I sample each individual note plucked strongly, and then tie sample start time, level, and filter cutoff to note velocity to create the dynamics. Although the sound may arguably not be as realistic as something with four billion round-robin samples, I find this approach more expressive overall, because any synth module changes tied to dynamics are continuous.
4. Slippin’ and Slidin’
Slides are an important bass technique — not just slides up or down a string, but over a semitone or more when transitioning between notes. For example, when going from A to C, you can extend the A MIDI note and use pitch bend to slide it up to C (remember to add a pitch bend of 0 after the note ends). Also, all my sampled bass instruments have sampled down and up/down slides for each string. Throw those in from time to time, and people swear it’s a real bass. Unless you’re emulating a fretless bass, you want a stepped, not continuous, slide to emulate sliding over frets, but you don’t want to re-trigger the note at each step. There are several ways to do this.
Fig. 3: Studio One’s Presence XT instrument has glide. Enable it, set a very short glide time, and add a very slight overlap between notes — the 1-measure slide shown here goes from C to G. The last note does not overlap with the G; this gap between notes allows the G note to re-trigger.
If the bass instrument has a legato mode, you can do a slide by adding notes at individual semitones to create the slide, and then using legato mode to avoid having the notes re-trigger. Legato mode does require an overlap between notes, but it can be very short.
Glide will also work under the same conditions, but you need to set the Glide time to minimum (Fig. 3). If your program doesn’t interpolate between pitch-bend messages (or you can turn off smoothing for the pitch-bend function), quantizing pitch-bend slide messages so they’re stepped is another solution, and this one doesn’t require entering extra notes. For example, with a virtual instrument’s pitch bend set to +/-12 semitones, quantizing the bend to 1/32-note triplets gives exactly 12 steps in an octave-up slide that lasts one beat, while 1/16-note triplets give 12 steps in an octave-up slide that lasts two beats. Or just draw a stepped pitch bend.
Then again, you might want to emulate a fretless bass and have continuous slides.
Fig. 4: Use these pitch-bend values to slide a precise number of semitones.
For precise slides, Figure 4 shows the amount of pitch-bend change per semitone when using a pitch-bend range of +/-12 semitones (recommended for bass to make these kinds of slides possible). For example, if an octave is a pitch-bend value of 8191 and you want to start a slide three semitones above the note where you want to land, start at a pitch-bend value of +2048 and end with a pitch-bend value of 0. If you want to step the part (this assumes you can turn off pitch-bend smoothing or enter precise values in an Event List), add equally spaced events at +1366, +683, and just before the final note, 0.
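Here is a minimal Python sketch of that math, assuming the recommended +/-12 semitone bend range (so the full upward sweep of 8191 covers one octave). Rounding may differ by a unit from the Fig. 4 values.

```python
BEND_RANGE = 12   # semitones mapped onto the 0..8191 upward bend sweep
MAX_BEND = 8191

def bend_for_semitones(semitones):
    return round(MAX_BEND * semitones / BEND_RANGE)

def stepped_slide(semitones_above_target):
    """Bend values for a fret-style slide down to the target note (bend 0)."""
    return [bend_for_semitones(s) for s in range(semitones_above_target, -1, -1)]

print(bend_for_semitones(3))  # 2048: start three semitones above the target
print(stepped_slide(3))       # [2048, 1365, 683, 0], within a unit of Fig. 4
```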
5. Mod Wheels Are Not for Vibrato
Dubstep people have figured this out — they eschew vibrato for tremolo or “filtrato.” With bass, I use the mod wheel for what I feel are more useful effects:
Roll off treble as the wheel rolls further away to emulate a traditional bass tone control
Mix in a sub-octave for an octave-divided bass sound
Alter tremolo depth to add pulsed tremolo sparingly
Increase drive to an amp sim to give more “growl”
Because you’ll likely be playing single notes for bass lines, your other hand will be free to work the mod wheel and increase expressiveness even further — and that’s a good thing.
Yamaha has a number of mobile apps for their DTX electronic drums to make drumming more fun while helping you to get better!
DTXM12 Touch
The DTXM12 Touch app not only lets you edit the pads with a touchscreen interface but also adds new features that expand its functionality in live performance situations. When the DTX-MULTI 12 is connected to an iPad or iPhone via USB, drummers can now trigger song playback and backing tracks from their music library using the pads, and then mix the audio through the stereo auxiliary input! Additionally, the app includes a mixer for all the sounds of a kit, including up to four sounds per pad, and access to every parameter of the instrument. It also lets users quickly see what voices are assigned to the pads on the touchscreen.
DTX502 Touch
The DTX502 Touch app lets drummers take control of the DTX502 drum trigger module using their iOS device’s touch-screen interface when connected via USB. Now it’s even easier to create custom user kits, layer and cross-fade two different sounds per pad, and program up to 30 click and tempo settings for instant recall. The app also serves as a conduit for downloading new kits in a wide range of styles from YamahaDTX.com. In addition, the app has a unique Hybrid Setup wizard that helps drummers quickly calibrate custom trigger settings for their DTX502-series kit, or any combination of electronic pads and acoustic drum triggers!
DTX402
With the DTX402 Touch app, the creative possibilities are nearly limitless. Fine-tune your DTX402 series kit to precision: change the sounds for any of the 10 built-in kits or individual pads, set custom tunings, volume settings, and more. Access the trigger setup, reverb, and pedal settings with a single touch, and adjust the virtual position of the open hi-hat. You can even set the volume for the onboard “Voice Guidance” training system. The 402 Touch app also has 10 built-in play-along songs designed to make you a more well-rounded, diversified drummer. Play along with the pre-recorded drums as a practice reference, or mute them and take on the show yourself. The app also has a big focus on education, offering 10 challenge-mode practice exercises that cover a variety of important skills and topics every drummer should strive for.
Song Beats
Song Beats is an iPhone app that supports your drum performance by visualizing which drums to hit and when to hit them while playing along with your favorite songs. The app also allows you to easily create custom accompaniments for drums, putting your drumming at the center of the band. In addition, you can also use 10 built-in demo songs or any MIDI song that you’ve already purchased from Yamaha MusicSoft by using iTunes File Sharing. Register Song Beats with Yamaha, and your first song is free!
DTX700 Touch
The DTX700 Touch app allows you to easily and intuitively customize your kit, with quick access to editing and layering. Fine-tune your sounds with the EQ and add filters with a simple touch and drag. Download free drum kits from YamahaDTX.com, or back up a kit or the whole module with an iOS device.
NI has released their smallest, most portable controller ever!
Native Instruments Komplete Kontrol M32 Features:
Micro-size keyboard controller with 32 keys for all your virtual instruments and effects
Affordable entry point into the NI world
Synth-action, custom NI micro-keybed
Informative OLED display for at-a-glance navigation
8 touch-sensitive control knobs
2 touch strips for intuitive expression
4-directional push encoder for one-handed sound browsing and project navigation
Tag-based preset browsing via the Komplete Kontrol software lets you find sounds quickly and hear instant previews
Smart Play lets you stay in key with over 100 scales and modes, play chord progressions and arpeggios with single keys, or map any scale to white keys only
Pre-mapped control of Komplete instruments and effects, plus hundreds of Native Kontrol Standard (NKS) plug-ins from leading manufacturers via Komplete Kontrol software
Expand your library with loops and samples from Sounds.com
Full VSTi and VST FX support
Deep integration with Maschine software
Intuitive control over Logic Pro X, GarageBand, and Ableton Live
TRS pedal input, assignable to sustain
USB 2.0 bus powered
Can be used as a generic MIDI controller
Software bundle included
Comes with all the software you need to get started making music. Included software:
As one of the inventors of the Musical Instrument Digital Interface, Roland has continued to push the boundaries of the now 36-year-old protocol(!) by continuously developing MIDI-based applications which bring totally new creative opportunities to musicians. One such application is the Roland AE-05 Aerophone GO, a unique digital wind instrument which uses MIDI (and audio) over Bluetooth to dramatically expand the playing experience.
Connecting to a compatible iOS or Android mobile device using Bluetooth allows the Aerophone GO to interact with a range of apps including Roland’s own Aerophone GO Plus and Aerophone GO Ensemble.
With Aerophone GO Plus, a player gains 50 new sounds triggered by MIDI over Bluetooth and can jam along to their favorite songs from their smartphone. In addition to an integrated metronome, the app also allows for customizing the connected Aerophone to suit the player’s technique, with all changes being communicated by MIDI over Bluetooth.
A second app, Aerophone GO Ensemble, connects up to 7 players with a single mobile device for group performance using a common bank of sounds, all facilitated by MIDI over Bluetooth. Whether the application is a lesson with a teacher, a duo performance, or a complete ensemble, MIDI over Bluetooth supports a unique wireless playing experience that would have been difficult to imagine 30+ years ago!
Not only the volume but also the sound itself is dynamically affected by the force with which you blow into the mouthpiece and the strength with which you bite it, providing a natural and richly expressive sound.
by Roland
The Aerophone has tons of internal sounds and built-in speakers, but it is also a great MIDI controller. Here are some of the parameters you can control on the Aerophone AE-10: the bite sensor can control pitch and vibrato, and the strength of your breath affects not only volume but other parts of the sound as well.
Recently Ableton announced a free update to Live – Version 10.1
There were a number of workflow improvements, but one of the major new features is that the Wavetable synthesizer now supports user wavetables. This allows you to import any wavetable or sample and use it as an oscillator.
Check out this YouTube video of everything that’s new in Live 10.1.
Wavetable synth architecture
Wavetable has dual oscillators plus a sub-oscillator, feeding into two filters, with five different types of resonant multimode filters available for each: Clean, OSR (based on the Oscar), MS2 (a model of the Korg MS-20), PRD (based on the Moog Prodigy), and SMP (a variation of the Sallen-Key topology). The MS2, PRD, SMP, and OSR modes are switchable between lowpass and highpass, with variable Drive for adding grit.
There are tons of preset wavetables, already organized into categories: Basics, Collection, Complex, Distortion, Filter, Formant, Harmonics, Instruments, Noise, Retro, and Vintage. You can pretty much guess what is in the presets from the category names.
Wavetable synthesis was used in Ensoniq, Korg, PPG and many other synthesizers. It can also do FM-like synthesis.
Wavetable synthesis is fundamentally based on periodic reproduction of an arbitrary, single-cycle waveform. In wavetable synthesis, some method is employed to vary or modulate the selected waveform in the wavetable. The position in the wavetable selects the single-cycle waveform. Digital interpolation between adjacent waveforms allows for dynamic and smooth changes of the timbre of the tone produced. Sweeping the wavetable in either direction can be controlled in a number of ways, for example, by use of an LFO, envelope, pressure or velocity.
by Wikipedia
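That description translates almost directly into code. Below is a minimal Python/NumPy sketch of wavetable playback with a hypothetical two-waveform table and an LFO sweeping the table position; real instruments like Ableton’s Wavetable use far larger tables and more careful interpolation.

```python
import numpy as np

SR = 44100
FRAME = 2048  # samples per single-cycle waveform
phase = np.linspace(0, 2 * np.pi, FRAME, endpoint=False)
# A tiny two-entry wavetable: a sine that can morph into a bright saw-ish wave.
table = np.stack([np.sin(phase),
                  sum(np.sin(k * phase) / k for k in range(1, 16))])

def render(freq, seconds, position_lfo_hz=0.5):
    n = int(SR * seconds)
    t = np.arange(n)
    # Sweep the table position (0..1) with a slow LFO.
    pos = (0.5 + 0.5 * np.sin(2 * np.pi * position_lfo_hz * t / SR)) * (len(table) - 1)
    lo = pos.astype(int)
    hi = np.minimum(lo + 1, len(table) - 1)
    frac = pos - lo
    # Phase accumulator selects the sample within the single-cycle frame.
    idx = (np.cumsum(np.full(n, freq * FRAME / SR)) % FRAME).astype(int)
    # Linear interpolation between adjacent waveforms smooths the timbre change.
    return (1 - frac) * table[lo, idx] + frac * table[hi, idx]

audio = render(110.0, 2.0)  # two seconds of a slowly morphing 110 Hz tone
```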
FM: This mode applies an FM modulator to the wavetable, with visual feedback so you can see the results. In this mode, the two adjustable parameters are tuning and amount.
You can achieve familiar FM effects by starting with the Sines 1 table in the Harmonics category (with a wave position of zero; pure sine), then adjusting the modulation amount parameter with an envelope. The tuning hot spots, where the FM effect retains harmonic coherence (without dissonant artifacts), are -100%, -50%, 0, 50%, and 100%. These correlate with ratios of 0.25:1, 0.5:1, 1:1, 2:1 and 4:1, respectively. Between those values, the Sines 1 sine wave is a fantastic resource for organic bell and mallet textures. Because FM is more controllable with simple carrier waveforms, complex wavetables will yield results that are more unpredictable.
by Ableton’s Lead preset designer and soundteam member Huston Singletary
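Incidentally, those five hot spots all sit on one exponential curve, which makes them easy to remember. This mapping is inferred from the quoted points, not documented Ableton behavior, so treat it as an assumption:

```python
# ratio = 4 ** (tune / 100) reproduces all five quoted hot spots:
# -100% -> 0.25:1, -50% -> 0.5:1, 0 -> 1:1, 50% -> 2:1, 100% -> 4:1
def fm_ratio(tune_percent):
    return 4 ** (tune_percent / 100)

for t in (-100, -50, 0, 50, 100):
    print(f"{t:+4d}% -> {fm_ratio(t):g}:1")
```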
Wavetable’s envelopes give you temporal control over the shape of the sound. Envelope 2 has a very typical acoustic shape that might be used for a piano; Envelope 3 makes a very short percussive sound.
One of my favorite techniques is to apply velocity to envelope 2 or 3’s peak parameter, which serves to tie that envelope’s modulation amount to the impact of hitting a key or Push pad.
by Ableton’s Lead preset designer and soundteam member Huston Singletary
Of course, wavetables really come alive when you move through the single-cycle waveforms, which creates timbral changes. The Prophet VS and PPG were vintage synths that really showed off these capabilities.
One of my favorite techniques for adding vintage animation to our wavetables is to modulate the PW parameter gradually for only one oscillator with a very slow triangle or sine LFO playing against a second oscillator, with Osc 2’s PW base value set to none or its FM amount slightly raised.
by Ableton’s Lead preset designer and soundteam member Huston Singletary
Ableton also added other features to Live 10.1, including a Channel EQ.
There is a new Delay effect combining the Simple Delay and Ping Pong Delay, with controls for Jump, Fade-In, and Pitch.
New automation features
Musicians get a palette of automation shapes to choose from, as well as the ability to stretch and skew automation, enter values with the numerical keypad, and easier access to clip modulation in Session View. Live now also detects curved movements inside automation and can merge multiple breakpoints into C- and S-shapes.
New in Live: Explore a broader palette of sounds with a new synth, Wavetable. Shape your music with three new effects, Echo, Drum Buss and Pedal. Edit multiple MIDI clips from a single view and never lose a great idea again, with Capture MIDI.
Jordan Rudess of Dream Theater is bringing his KeyFest experience back to Sweetwater! With three days of jamming alongside, hanging out with, and learning from Rudess and guests David Rosenthal (Rainbow, Billy Joel, Cyndi Lauper) and Otmaro Ruíz (solo artist, John McLaughlin, Abraham Laboriel), KeyFest is an event no keys player should miss.
Call (260) 432-8176 x1993 to register.
MEET THE ARTISTS
JORDAN RUDESS
Jordan Rudess, best known as the keyboardist / multi-instrumentalist for platinum-selling, Grammy-nominated prog rock band Dream Theater, began his training at the world-renowned Juilliard School of Music at the age of nine. Since then, he has gone on to a distinguished and diverse career, gaining fans and recognition the world over, not to mention being voted Best Keyboardist of All Time (Music Radar magazine).
In addition to playing in Dream Theater, Jordan has also worked with a wide range of artists, including David Bowie, Enrique Iglesias, Liquid Tension Experiment, Steven Wilson, and the Dixie Dregs, among others. And Jordan’s interest in state-of-the-art keyboard controllers and music apps has also led to a successful career with his app development company, Wizdom Music. For more: jordanrudess.com and wizdommusic.com
DAVID ROSENTHAL
Few musicians have achieved the broad-based success that David Rosenthal has earned as a musical director, keyboardist, synthesizer programmer, producer, orchestrator, and touring professional. Since graduating from Boston’s prestigious Berklee College of Music, David’s talents have been continually in demand with many of the most prominent artists in the world, including his long tenure as Keyboardist and Musical Director for Billy Joel, plus work with Bruce Springsteen, Elton John, Ritchie Blackmore and Rainbow, and Cyndi Lauper.
Besides recording and touring, David also continues to show a strong commitment to educating young musicians at such prestigious music schools as Berklee College of Music, Musicians Institute, and Full Sail University. Accordingly, Berklee has honored David with its Distinguished Alumni Award for Outstanding Achievements in Contemporary Music, and he was voted Best Hired Gun in Keyboard magazine’s readers’ poll.
OTMARO RUIZ
Known for his versatility and virtuosity, Otmaro Ruíz is considered one of the most important jazz pianists in the scene today. With an intense musical career filled with concerts, workshops, and recordings worldwide, Otmaro has earned multiple Grammy nominations and awards, a Lifetime Special Award for International Exposure from the Venezuelan National Artists Institute (for outstanding career in a foreign country), and even an Honorary Doctorate Degree in Musical Arts from Shepherd University.
The long list of renowned musicians with whom Otmaro works constantly confirms his versatility. Among these amazing artists are John McLaughlin, John Patitucci, Jing Chi, Frank Gambale, Peter Erskine, Dave Weckl, Robben Ford, and Vinnie Colaiuta, making it easy to see why he is regarded as one of the most sought-after keyboardists in the world today.
Yamaha originally launched the Soundmondo website and mobile app in 2015 for the reface line of keyboards. It was one of the first major websites to utilize Web MIDI.
Connect your reface keyboard to your computer, iPad, or phone, launch Chrome as your browser, and you can browse sounds shared by other reface owners. You can also create and share your own sounds with people around the world.
There are over 20,000 free reface sounds available online.
“Soundmondo is to sound what photo-sharing networks are to images. It’s a great way to share your sound experiences and get inspiration from others.”
by Nate Tschetter, marketing manager, Synthesizers, Yamaha Corporation of America.
Yamaha has since expanded Soundmondo to include other Yamaha keyboards, including the Montage, MODX, and CP88/73 stage pianos.
So exactly how does social sound sharing work? Well, it’s actually pretty simple. You select your instrument and then browse by tags: for example, all the sounds tagged 2000s, EDM, and Piano.
Select a sound and it is sent from the Soundmondo server to your browser, and from your browser to your keyboard, where you can play it. If the synth or stage piano can store sounds, you can store the sound locally on your keyboard. Using the Soundmondo iOS app, you can create set lists and organize your sounds for live performance.
When Yamaha launched Soundmondo compatibility for Montage they produced 400 MONTAGE Performances, including content from the original DX ROM Cartridges, special content from Yamaha Music Europe and 16 original Performances from legendary synthesizer sound designer Richard Devine.
You can see Richard’s performance using the Montage and Richard’s modular setup at Super Booth 2018.
We’re in a golden age of sampled instruments; these days, you can find realistic-sounding samples of everything, including drums. Back in the day, programmed drums sounded artificial and mechanical. Today, drums only have to sound that way if you want them to — and that sound is perfect for certain tracks! But assuming you want realistic-sounding sampled drums for your productions, here are six tips on how to program your drums to sound more lifelike.
1. Sonic Variation
When a drummer attacks the skins, each hit sounds a bit different. He or she hits the drumhead in a slightly different location each time, the sticks hit at different angles, the velocity and power are a bit different, and there are differences between right- and left-hand strokes — even when playing just one drum. All of these things make a difference in the tone that is produced by the drum and contribute to the instrument sounding “live.” To emulate this, make sure that each drum is represented by more than one sample — and while this is critical for preventing “machine-gun drum rolls,” it’s important every time a virtual drum is “hit.” These days, many dedicated drum software instruments will handle mixing up samples automatically. But it can also be done by varying which sample is played based on the velocity of the hit. Many samplers and virtual instruments allow you to set up multiple samples in a round-robin, meaning that the sampler will choose a sample at random for each hit. If your instrument doesn’t support this, you can use an LFO tied to velocity or even to a filter, an EQ, a pitch shifter, or another processor to subtly alter the pitch, tone, or shape of a triggered drum, to add variation.
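If you are triggering your own samples, the round-robin idea takes only a few lines. Here is a minimal Python sketch with a hypothetical play_sample callback; the take names and the small velocity wobble are illustrative starting points.

```python
import random

SNARE_TAKES = ["snare_a.wav", "snare_b.wav", "snare_c.wav"]
_last_take = None

def hit_snare(play_sample, velocity):
    """Pick a different take than last time and nudge the velocity slightly."""
    global _last_take
    # Avoid repeating the same take twice in a row (no machine-gun rolls).
    take = random.choice([t for t in SNARE_TAKES if t != _last_take])
    _last_take = take
    vel = max(1, min(127, velocity + random.randint(-4, 4)))
    play_sample(take, velocity=vel)
```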
2. Groovical Variations
Nothing makes programmed drums sound mechanical more than having every hit land exactly on a quantized grid. It’s an instant recipe for rigid, robotic, metronomic drums with no “groove.” Even the best human drummer playing along with a click track has slight variations in timing, coming in slightly ahead of or behind the beat, etc. — and they’ll often do this intentionally to either drive a part forward or to lay it back. A drummer may even push certain drums forward and pull others back at the same time to create a certain groove. If your drum software has a “humanize” function, that may add just the right amount of slight variation that won’t make any hit sound out of time, but will make it just off the grid enough to sound more alive. If there isn’t a humanize function, you can duplicate the effect manually by pulling individual drums or hits a few clicks ahead of or behind the beat. Some DAWs and drum software also offer “groove” functions that allow you to apply a particular “feel” to your MIDI tracks. To make this easy, you might want to break the MIDI tracks that drive the drums out to individual tracks (a separate MIDI track for the kick, one for the snare, one for hi-hat, and so on), so you can adjust them independently.
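For instruments without a humanize function, the manual version is easy to script. Here is a minimal Python sketch, assuming notes are (start_tick, duration, pitch, velocity) tuples at 480 ticks per quarter note; the +/-5 tick window is an arbitrary starting point.

```python
import random

def humanize(notes, max_offset_ticks=5, push=0):
    """Nudge each hit a few ticks off the grid. `push` biases the whole part
    ahead of the beat (negative) or behind it (positive)."""
    out = []
    for start, dur, pitch, vel in notes:
        jitter = random.randint(-max_offset_ticks, max_offset_ticks)
        out.append((max(0, start + push + jitter), dur, pitch, vel))
    return out

hats = [(i * 240, 60, 42, 90) for i in range(8)]  # eighth-note hats on the grid
print(humanize(hats, push=3))  # slightly behind the beat: "laid back"
```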
3. The Rare and Unique Three-armed Drummer
Most drummers have two arms and two feet. That means that at any given point in time, they’re only going to be able to play two hand-struck and two foot-struck drums or cymbals. When you’re going for realism, remember that a drummer can’t be playing a two-handed hi-hat pattern at the same time they’re doing a two-handed tom fill. Or playing a double-kick pattern and a pedaled hi-hat pattern together. They can’t strike two toms and a cymbal simultaneously. Having too many instruments attack at the same time is a dead giveaway that a part is programmed and not “real.” Study the patterns and rhythms of real drummers to see how they’re making the most of their four limbs, and make sure you don’t “improve” on a human drummer by programming an extra arm or foot!
4. Moving in Stereo
When you hear drums live or record a live drummer, there is a natural stereo field created by the drum set’s physical positioning. Imagine standing dead center in front of the kit; some of the drums will be to the left of the kick drum, others to the right of the kick drum. If you place each drum in the stereo field the way that a real drum kit is set up, it will add a realistic sense of space to the kit. There are two “perspectives” you can use for this: the drummer’s perspective looking at the kit (for a right-handed drummer, the hi-hat will be to the left, the floor toms to the right) and the audience perspective looking at the kit (a right-handed drummer will have the hi-hat on the right and the floor toms on the left). Either perspective is correct and fine; choose the one you prefer or that works best for your song. Also, if you have stereo overheads on the kit, make sure that the panning within those overheads is matched by the panning of the individual drums in the stereo field (if the hi-hat is halfway to the left in the overheads, the hi-hat track should also be panned halfway to the left), otherwise the instruments will not localize correctly in the speakers and may sound “smeared.”
5. Make Room for the Drums
Real, physical drums have weight and take up space in the room. When you hit them, the sound bounces around the room, creating a natural ambience. That ambience will certainly be picked up if there are “room” mics, but the ambience is also audible in the overhead mics and even in the close mics on the drums. You may not immediately notice it, but if it’s gone, you can tell the difference. Some drum samples include the ambience of the room they were recorded in or allow you to add it into the final mix. For those that were recorded dry, add a very slight amount of a room-type reverb to the drums, not enough to be heard as an effect, but enough to give the drum sounds a sense of space. Note that this is not the same thing as reverb processing you add for effect. You may, for example, include a room reverb for subtle ambience, and still use a gated reverb or a big plate reverb to create a special effect.
Established in 2009, ROLI is creating the future of musical instruments. From next-generation keyboards like the Seaboard to the modular music-making devices of BLOCKS, ROLI instruments are deeply expressive and intuitive to play. They are so versatile that they can sound like anything and be played anywhere.
Technologically advanced touch interfaces make every movement musical on the Seaboard GRAND, Seaboard RISE, Seaboard Block, Lightpad Block, NOISE app, and ROLI PLAY app — part of a growing family of ROLI products that are extending the joy of making music to everyone.
ROLI Songmaker Kit
The ROLI Songmaker Kit is an incredibly high-powered yet flexible music creation kit — and the newest product from ROLI. Combining the expressive power of the Seaboard Block, Lightpad Block, and Loop Block, it gives you everything you need to make a track anywhere.
It’s more than the sum of its parts. Play the Blocks together as an integrated controller, or play each Block by itself. Connect the kit to your favorite software, and map effects and functions to the incredibly responsive surfaces of the Lightpad and Seaboard Block. The huge software package includes Equator, Tracktion Waveform, and Ableton Live Lite (Ableton is also a May MIDI Month platinum sponsor).
Roli and Ableton Live Lite
Ableton Live, the high-powered digital audio workstation (DAW) and sequencer, is a staple in music production. Combining tools for composing, recording, beat-matching and crossfading, Ableton Live’s versatility has made it a favorite of both producers and performers. Now all Lightpad Blocks — including the new Lightpad Block M — seamlessly integrate with Ableton Live. And all Lightpad owners get Ableton Live 9 Lite for free! So you can enjoy the dynamism of Ableton Live and control the DAW in a totally new way.
Brothers Marco and Jack Parisi recreate a Michael Jackson classic hit
Electronic duo PARISI are true virtuosic players of ROLI instruments, whose performances have amazed and astounded audiences all over the world — and their latest rendition of Michael Jackson’s iconic pop hit “Billie Jean” is no exception.
Roli and MPE
ROLI has been an important contributor to MIDI and helped to make MIDI Polyphonic Expression (MPE) a new part of the MIDI standard. Check out this article as MIDI Association advisory board member and MIDI Month Tip contributor Craig Anderton explains MPE and the links to the MPE coverage on MIDI.org.
MIDI Polyphonic Expression (MPE) is a technological breakthrough for today’s musicians, and one of the unique aspects of this emerging category is that it works interdependently across hardware and software. Built on the original MIDI specification, MPE-compatible software programs provide new ways to define notes and performance gestures. MPE-compatible hardware controllers offer innovative interfaces that let musicians engage with all of the extra expressiveness facilitated by the software.
One of the biggest recent developments in MIDI is MIDI Polyphonic Expression (MPE). MPE is a method of using MIDI which enables multidimensional controllers to control multiple parameters of every note within MPE-compatible software…
Celemony Melodyne has one foot in audio, but the other in MIDI because the analysis that it runs on audio ends up being easily converted to MIDI data. If you can sing with consistent tone and level, Melodyne can convert your singing into MIDI. The same functionality for monophonic tracks exists in many DAWs.
MIDI data has been extracted from the guitar track at the top, and is now being edited in a piano roll view editor.
This has other uses, too. For example if you’re a guitar player and want a cool synth bass part, you can record the bass part on your guitar, extract the MIDI notes using Melodyne’s analysis (how you do this varies among programs, but it may be as simple as dragging an audio track into a MIDI track), transpose the notes down an octave, and drive a synth set to a cool bass sound. You may need to do a little editing, but that’s no big deal.
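That post-extraction step is easy to script as well. Here is a minimal Python sketch using the mido library to drop every note of an exported MIDI file down an octave; the file names are hypothetical.

```python
import mido

# Load the MIDI file exported from Melodyne (or your DAW's audio-to-MIDI).
mid = mido.MidiFile("guitar_extracted.mid")
for track in mid.tracks:
    for msg in track:
        if msg.type in ("note_on", "note_off"):
            msg.note = max(0, msg.note - 12)  # transpose down one octave
mid.save("synth_bass.mid")  # now drive a synth set to a cool bass sound
```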
Here are some videos on how to do the same thing in our Platinum sponsor’s DAW, Ableton Live.
Audio to MIDI in Ableton
Here is a link to a more detailed article on how to convert audio to MIDI in three different DAWs: Ableton, Cubase, and Sonar.
We show you 3 programs (Ableton, Cubase, and Sonar) that will allow you to convert audio to MIDI and exactly how to go about using this very useful feature.
At SXSW 2019, Moritz Simon Geist performed and presented several workshops on using robots and MIDI. His new EP is created completely with MIDI controllers controlling robots he created himself.
A latency control concept for MIDI-driven mechanical robotic instruments
Geist is deeply into MIDI. His blog details a proposal for overcoming the latency caused by the physical movements of robots, using MIDI and Cycling ’74’s Max.
In 1986 Frank Zappa released the final studio album of his lifetime; for the remaining seven years of his life, he would release only live concert albums. Jazz from Hell is an instrumental album whose selections were all composed and recorded by Frank Zappa. It was released in 1986 by Barking Pumpkin Records on vinyl and by Rykodisc on CD. Zappa won a 1988 Grammy Award for Best Rock Instrumental Performance for this album.
What is a “Synclavier” ?
The Synclavier was an early digital synthesizer, polyphonic digital sampling system, and music workstation manufactured by New England Digital Corporation of Norwich, Vermont, USA. It was produced in various forms from the late 1970s into the early 1990s. The instrument has been used by prominent musicians.
The original design and development of the Synclavier prototype occurred at Dartmouth College with the collaboration of Jon Appleton, Professor of Digital Electronics, Sydney A. Alonso, and Cameron Jones, a software programmer and student at Dartmouth’s Thayer School of Engineering.
The system evolved in its next generation of product, the Synclavier II, which was released in early 1980 with the strong influence of master synthesist and music producer Denny Jaeger of Oakland, California. It was originally Jaeger’s suggestion that the FM synthesis concept be extended to allow four simultaneous channels or voices of synthesis to be triggered with one key depression to allow the final synthesized sound to have much more harmonic series activity. This change greatly improved the overall sound design of the system and was very noticeable. 16-bit user sampling (originally in mono only) was added as an option in 1982. This model was succeeded by the ABLE Model C computer based PSMT in 1984 and then the Mac-based 3200, 6400 and 9600 models, all of which used the VPK keyboard.
Synclavier II (1980): 8-bit FM/additive synthesis, 32-track memory recorder, and ORK keyboard. Earlier models were entirely controlled via the ORK keyboard with buttons and wheel; a VT100 terminal was subsequently introduced for editing performances. Later models had a VT640 graphic terminal for graphical audio analysis (described below).
Original Keyboard (ORK, c.1979): the original musical keyboard controller in a wooden chassis, with buttons and a silver control wheel on the panel.
Sample-to-Disk (STD, c.1982): the first commercial hard disk streaming sampler, with 16-bit sampling at up to 50 kHz.
Sample-to-Memory (STM): a later option to sample sounds and edit them in computer memory.
Direct-to-Disk (DTD, c.1984): the first commercial hard disk recording system.
Signal File Manager: a software program operated via the VT640 graphic terminal, enabling “Additive Resynthesis” and complex audio analysis.
Digital Guitar Interface, SMPTE timecode tracking, and a MIDI interface.
by Wikipedia
What is interesting for us is the fact that the Synclavier was a very advanced and elaborate MIDI instrument which revolutionized the music industry.
After two decades of depending on the skills, virtuosity, and temperament of other musicians, Zappa all but abandoned the human element in favor of the flexibility of what he could produce with his Synclavier Digital Music System.
The selections on “Jazz from Hell” were composed, created, and executed by Zappa with help from his concurrent computer assistant Bob Rice and recording engineer Bob Stone. Far from being simply a synthesizer, the Synclavier combined the ability to sample and manipulate sounds before assigning them to the various notes on a piano-type MIDI keyboard.
At the time of its release, many enthusiasts considered it a slick, emotionless effort. In retrospect, their conclusions seem to have been a gut reaction to the methodology, rather than the music itself.
by AllMusic
Having been an avid amateur maker of MIDI-based music for some years now, I took up the challenge of reviving some tracks from this groundbreaking album and putting them on my YouTube channel.
I will present one track here, made with commercially available DAWs and MIDI files that are available on the web.
“G-Spot Tornado” is a musical composition created by Frank Zappa for his album Jazz from Hell in 1986. He thought the composition was so difficult to play that it could not possibly be performed by a human, so he initially recorded the song using a Synclavier DMS. Zappa was later proven wrong when the song was performed live on The Yellow Shark. The piece has been called one of “Zappa’s most successful Synclavier releases in the tonal idiom…”
Frank Zappa’s music keeps inspiring me since I bought my first Zappa record in 1968 and I was lucky to see him perform on several occasions live on stage.
It’s hard to find an actual Synclavier these days, but you can find information on the Synclavier at Vintagesynth.com, and Arturia released a softsynth reproduction of it, the Synclavier V, in 2016.
The Synclavier V faithfully recreates the elite digital synthesizer/workstation that started it all, powering some of the biggest hits and film soundt…
On May 26, we held the very first MIDI Live! chat with a panel of MPE specialists.
We recorded the session and it is presented here as a podcast.
Listeners were not only able to send in questions via text but were able to actually join the discussion and interact directly with the panelists. Roger Linn demoed his Linnstrument live from his studio in Los Altos.
Discussions included the differences between the original MPE spec and the final MMA specification, MPE checklists and test sequences, and the requirements for obtaining the MMA MPE logo that is under development.
We’ve already started planning for the release of the MIDI-CI specifications, so stay tuned to the MIDI Live! channel for future events!
We hope you enjoyed these daily tips during MIDI Month—but that’s not the end of it, because there will be plenty more tips to come when you join The MIDI Association. TMA is an all-volunteer organization that believes in its mission: to nurture an inclusive global community of people who create music and art with MIDI. Our strength is our community, and your response has been powering TMA since its inception. We have click rates, open rates, and engagement that are way above the industry average—thank you for your involvement.
MIDI is poised to make some major leaps forward this year. Actually, it already has, with MIDI Polyphonic Expression and MIDI-CI having been ratified. However, there’s more to come, and this web site is the place to find out about the latest advances, learn how to make the best use of MIDI, become inspired by new possibilities, and share ideas with others.
If you haven’t already joined The MIDI Association, now’s the time. It’s free, and membership provides an all-access pass to the site. Welcome!
In the tip for May 22, we covered how to control effects and virtual instrument parameters with a footpedal hooked into a synthesizer or other controller. But what if you don’t have a synthesizer or other controller…and would rather use a 100 mm fader than a footpedal?
No problem, if you’re willing to do a little soldering. Take a 1/4″ phone jack, and wire the tip to the fader’s wiper (center terminal), the ground to the fader’s terminal that connects to the wiper when the fader is all the way down, and the ring to the fader’s terminal that connects to the wiper when the fader is all the way up.
Wire up a phone jack to a long-throw fader, and you can have fader control over MIDI parameters.
If you do have a keyboard or similar MIDI controller, you can use the fader instead of an expression pedal by patching a stereo cable between the fader jack and an expression pedal jack. A more versatile option is to use MIDI Solutions’ Pedal Controller, because you can program it to output any controller number (as well as aftertouch, pitch bend, or system exclusive), and alter the potentiometer’s curve to, for example, have a linear potentiometer give a logarithmic response.
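If your hardware can’t reshape the curve, you can also remap incoming CC values in software before they reach the instrument. A minimal Python sketch follows; the exponent of 2.5 is an arbitrary starting point, and larger values bow the curve harder.

```python
def linear_to_log(cc_value, exponent=2.5):
    """Approximate an audio-taper response from a linear fader:
    slow rise at the bottom of the throw, steep near the top."""
    normalized = cc_value / 127.0
    return round(127 * normalized ** exponent)

for v in (0, 32, 64, 96, 127):
    print(v, "->", linear_to_log(v))  # 0 -> 0, 64 -> 23, 127 -> 127
```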
Be a slave to the mouse no more—find out how some serious hands-on control can add more expressiveness.
MIDI has lasted over 30 years, so the MIDI data you entered in a sequencer back in the 80s can still drive today’s virtual instruments (which is pretty amazing, come to think of it). However, virtual instruments, computers, and operating systems don’t have, shall we say, quite the same kind of longevity. You’ll be reminded of this when Steven Spielberg calls to say he heard your great song by accident, wants to use it as the theme song for an upcoming blockbuster, and could you please make a few tweaks to the mix…but when you open the project, you see “Plug-in not found.”
Ooops. And then you find out that the plug-in was never updated, it works only with Mac System 7, you lost your authorization code, and the company that made it went out of business years ago. Double oops.
And that’s why it’s a good idea to render your MIDI-driven tracks into audio files. Although you can’t totally future-proof a project, the odds are extremely good that programs of the future will be able to read WAV or AIF files.
The MIDI track on the bottom has been rendered to create an audio version above it
Rendering usually just involves selecting the MIDI track, then choosing an option like “bounce” or “transform to audio.” Now you’ve captured your instrument as an audio file. As an added bonus, you can now save the instrument preset and then delete the instrument so it no longer takes power from the CPU. However, leave the MIDI track because it requires virtually no CPU power. If you later decide you need to edit the part, re-insert the instrument, call up the preset, and re-do the part.
The way most keyboard players add vibrato is to turn up the mod wheel, and inject some LFO to change the oscillator pitch periodically. That’s fine, but consider guitar players—they add vibrato by moving their fingers on strings, which gives a more human quality than using an LFO.
So, try your hand (get it?) at doing vibrato with your fingers instead of using the LFO. This also frees up the mod wheel to do other, perhaps more interesting changes (see the tip from May 9, “Get Imaginative with the Mod Wheel”).
Here’s what finger vibrato looks like after it follows an upward bend.
For the most realistic guitar-style bending, remember to bend up, not down—strings can only bend up, unless you’re using a vibrato tailpiece that can shift the pitch up or down.
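If you want to see the shape before you play it, finger-style vibrato is easy to generate as pitch-bend data. In this minimal Python sketch the rate, depth, and target values are arbitrary; note that the wobble only dips below the bend target, so the pitch never falls below the unbent note, just as a bent string behaves.

```python
import math

def finger_vibrato(target_bend=2048, rate_hz=5.5, depth=300,
                   seconds=1.0, events_per_sec=50):
    """Bend up to `target_bend`, then oscillate by partially releasing it."""
    events = []
    for i in range(int(seconds * events_per_sec)):
        t = i / events_per_sec
        # (sin - 1)/2 ranges from 0 down to -1, so the wobble stays <= target.
        wobble = depth * (math.sin(2 * math.pi * rate_hz * t) - 1) / 2
        events.append((t, int(target_bend + wobble)))
    return events  # (time_seconds, pitch-bend value) pairs to draw or send

for t, bend in finger_vibrato()[:5]:
    print(f"{t:.2f}s  bend={bend}")
```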
There’s more to life than audio echo—like MIDI echo. Although the concept of MIDI echo has been around for years, early virtual instruments often didn’t have enough voices to play back new echoes without stealing voices from previous echoes. With today’s powerful computers and instruments, this is less of a problem, so let’s revisit MIDI echo.
It’s simple to create MIDI echo: Copy your MIDI track, and then drag the notes for the desired amount of delay compared to the original track. Repeat for as many echoes as you want, then bounce all the parts together (or not, if you think you’ll want to edit the parts further).
The notes colored red are the original MIDI part, the blue notes are delayed by an eighth note, and the green notes are delayed by a dotted-eighth note. The associated note velocities have also been colored to show the velocity changes for the different echoes.
But wait—there’s more! You can not only create polyrhythmic echoes, but also change velocities on the different notes. The later echoes can have different dynamics, but there’s also no law that says all the changes must be uniform. Nor do you have to follow the standard “rules” of echo—consider dragging very low-velocity notes ahead of the beat to give pre-echo. There are many, many possibilities with MIDI echo…check them out.
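If you’d rather script the copy-and-delay process than drag notes by hand, the idea is easy to express in code. Here’s a minimal sketch in Python, assuming notes are stored as simple (start_tick, pitch, velocity) tuples rather than any particular DAW’s format; the delay amount and velocity scaling are arbitrary examples.

# Minimal sketch: create MIDI echoes by copying notes with a time offset
# and scaled-down velocities. Notes are (start_tick, pitch, velocity)
# tuples at an assumed 480 ticks per quarter note.
def make_echoes(notes, delay_ticks, repeats, velocity_scale=0.6):
    """Return the original notes plus `repeats` delayed, quieter copies."""
    echoed = list(notes)
    for i in range(1, repeats + 1):
        for start, pitch, velocity in notes:
            new_velocity = max(1, int(velocity * velocity_scale ** i))
            echoed.append((start + delay_ticks * i, pitch, new_velocity))
    return sorted(echoed)

original = [(0, 60, 100), (480, 64, 96), (960, 67, 92)]
# A dotted-eighth echo at 480 ticks per quarter is 360 ticks:
print(make_echoes(original, delay_ticks=360, repeats=2))

If you want the pre-echo trick mentioned above, pass a negative delay_ticks and clamp any start times that land below zero.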
ReWire is a software protocol that allows two (or sometimes more) software applications to work together as one integrated program. For example, suppose you wish your DAW of choice had Propellerhead Reason’s roster of way cool virtual instruments, but you don’t want to learn a different DAW. No problem: use ReWire with your DAW, and get Reason into the mix.
ReWire requires a client application (also called the synth application) that plugs into a ReWire-compatible host program (also called the mixer application) such as Cakewalk, Cubase, Digital Performer, Live, Logic, Pro Tools, Samplitude, Studio One Pro, etc. In the host, you’ll have an option to insert a ReWire device. The process is very much like inserting any virtual instrument, except that you’re plugging in an entire program, not just an instrument. You usually need to open the host first and then any clients, and close programs in the reverse order. You won’t break anything if you don’t, but you’ll likely need to close your programs, then re-open them in the right order.
ReWire sets up relationships between the host and client programs.
Here’s how the client and host work together.
The client’s audio outputs stream into the host’s mixer.
The host and client transports are linked, so that starting or stopping either one starts or stops the other.
Setting loop points in either application affects both applications.
MIDI data recorded in the host can flow to the client (excellent for triggering soft synths).
Both applications can share the same audio interface.
ReWire is an interconnection protocol that doesn’t require much CPU power, but note that you’ll need a computer capable of running two (possibly powerful) programs simultaneously. Fortunately most modern computers can indeed handle ReWired programs, so find out for yourself what this protocol can do.
Sometimes you hit notes you don’t want to hit, particularly if you’re playing MIDI guitar or some other alternate controller (and although this tip is most relevant to MIDI guitar, even with keyboards you may end up brushing against some keys accidentally and creating notes you don’t want). Here are some ways to clean up your data stream.
Delete pressure data. Your controller may generate pressure (aftertouch) and your sequencer might record it…but does your synth preset respond to it? If not, the pressure data is just taking up space. If you didn’t filter it out on the way in, delete it now.
Short note glitches. Sometimes you’ll find notes with extremely short durations, and you have no idea how they got there. You’ll usually find these because you experience some kind of problem during playback, but can’t see the notes because they’re so short. So, use your sequencer’s data filtering option (it’s called different things in different programs, like Logical Edit, Find and Replace, etc.) to select only notes shorter than a certain number of ticks. The best number depends on the sequencer’s resolution, but it’s a pretty safe bet that notes with durations shorter than 10 ticks aren’t intentional.
Cakewalk’s Deglitch menu weeds out notes, velocity, and duration that don’t meet particular characteristics.
Abnormally low velocities. Just as some “ghost” notes have unusually short durations, some will have unusually low velocities. Again, use whatever feature your software offers to remove all notes with velocities under 5 to 10; the sketch below shows both of these filters in code.
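Here’s a minimal sketch of both cleanup filters in Python, assuming notes are (start, duration_ticks, pitch, velocity) tuples; the thresholds are the ballpark figures suggested above and should be adjusted to your sequencer’s resolution.

# Minimal sketch: remove "ghost" notes that are too short or too quiet.
MIN_DURATION_TICKS = 10   # adjust to your sequencer's resolution
MIN_VELOCITY = 5

def deglitch(notes):
    """Keep only notes that meet both duration and velocity thresholds."""
    return [n for n in notes
            if n[1] >= MIN_DURATION_TICKS and n[3] >= MIN_VELOCITY]

notes = [(0, 240, 60, 100), (120, 3, 62, 90), (240, 240, 64, 4)]
print(deglitch(notes))   # keeps only the first note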
In the days before click tracks, tempos varied because musicians are humans, not crystal-controlled clocks. However, these changes were far from random. While researching an article for Sweetwater’s inSync web publication, I analyzed the tempo changes for several hits from the past that didn’t use a click track and noticed a common element of most songs: the tempo would accelerate up to a crucial point in the song, then decelerate during a verse or chorus. This type of change was repeated so often, in so many songs I analyzed, that it seems to be an important musical element that’s almost inherent in music played without a click track. It makes sense this would add an emotional component that could not be obtained with a constant tempo.
As one example, here’s what the tempo looks like for the Beatles’ “Love Me Do.” Their tempo variations are quite premeditated.
While the tempo changes in the Beatles’ “Love Me Do” may appear random, they follow a pattern.
Note the dramatic pause at “so please, love me do” around measure 16 and again at 49, and the natural increase in tempo when it went into the “Love, love me do” verse. They also sped up a bit over the course of the track, which happens a lot in songs recorded without a click track.
If you start a song with MIDI tracks, it’s easy to experiment with tempo variations because the sound of the instruments won’t change. Once you’ve nailed a good feel for the tempo, then you can start adding audio tracks that follow the tempo changes.
Sometimes you don’t need an external, dedicated MIDI controller—the one on your favorite synth may be all you need, and the synth even has built-in sounds. The keyboard usually feeds data to the synth’s MIDI out, but also to its internal sounds (called “local control”). But if your sequencer echoes its interface’s MIDI in to the interface’s MIDI out, then the MIDI data from your synth will re-enter your synth’s MIDI in and cause “double triggering,” because both the keyboard and the interface’s MIDI out trigger the same notes. To prevent this, disable the synth’s local control (typically a synth setup or preference option). Or, create a track in your DAW that transmits a value of zero on continuous controller 122, which turns off the synth’s local control.
Turning off local control is important if you’re using a synthesizer as a controller for your host software.
Another gotcha is that some sequencers try to be considerate—they default to sending a local control off command to prevent double-triggering, because they assume that if you’re using a synth as a controller, you don’t want double triggering. But this means that if the sequencer isn’t echoing the MIDI input to the output, you won’t hear the synth when you play until you turn on local control—or boot up your sequencer.
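For reference, Local Control Off is nothing exotic: it’s just continuous controller 122 with a value of 0 (a value of 127 turns local control back on). Here’s a minimal sketch using the Python mido library; the port name is a placeholder you’d replace with your own interface’s name.

# Minimal sketch: send Local Control Off (CC 122, value 0) to a synth.
# The port name below is a placeholder; list your real ports with
# mido.get_output_names().
import mido

with mido.open_output('Your MIDI Interface') as port:
    port.send(mido.Message('control_change', channel=0,
                           control=122, value=0))   # 0 = local off, 127 = on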
Some virtual instrument and effects parameters just cry out for footpedal control—too bad you don’t have a pedal that outputs MIDI data…or do you?
If you have a keyboard synthesizer or controller, it will probably have an expression pedal jack. The standard MIDI controller for expression is controller #11, and unless your keyboard or controller is really old, the odds are good that plugging an expression pedal into the pedal jack, then moving the pedal, will send controller #11 messages out the keyboard or controller’s MIDI out. A floor multieffects unit for guitar with a built-in pedal may also transmit controller messages.
The Yamaha FC-7 Expression Pedal can control more than just parameters inside a hardware synthesizer.
Assuming the target parameter you want to control has MIDI Learn, enable it (often done by right-clicking or shift-clicking on a control and choosing MIDI Learn), wiggle the footpedal, and now the parameter has “learned” to respond to your footpedal motion. Note that if another parameter is already controlled by controller #11, you’ll probably want to click on it and call up “MIDI Forget.”
You can “humanize” sequences that have been quantized too rigidly by tweaking the start times for individual notes or phrases. Ignore any menu item called “humanization,” because this usually just adds randomness—that’s not what makes timing human (unless the human in question had too much to drink). Instead, alter note timings manually or use a “slide” editing function; note that any “snap” function needs to be turned off, and these changes should be subtle.
Mixcraft’s MIDI editing can move selected notes early or late, as well as add randomization.
For example (a short code sketch follows these examples):
Jazz drummers often hit a ride cymbal’s bell ahead of the beat (earlier) to “push” a song.
Rock drummers frequently hit the snare behind the beat (later) for a “big” sound.
For electronic dance music, move double-time percussion parts (shaker, tambourine, etc.) slightly ahead of the beat for a more urgent feel.
With tom fills, delay each subsequent note of the fill a tiny bit more. This can make a tom fill sound gigantic.
If two percussion sounds or staccato harmony lines hit on the same beat, try sliding one part ahead of or behind the beat to keep the parts from interfering with each other.
Move a crash cymbal ahead of the beat to highlight it, or behind the beat to have it mesh more with the track.
If a bass note and kick hit on the same beat, delay the bass slightly to emphasize the drum (hence the rhythm), or advance the bass a tiny bit to emphasize melody.
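As promised, here’s a minimal sketch of the slide idea in Python, assuming notes are (start_tick, pitch, velocity) tuples; the 15-tick shift and the snare note number (38, per General MIDI) are just examples.

# Minimal sketch: nudge selected notes early or late by a fixed number
# of ticks, the scripted equivalent of a "slide" edit. A negative shift
# moves notes ahead of the beat; a positive shift moves them behind it.
def slide(notes, shift_ticks, select=lambda pitch: True):
    out = []
    for start, pitch, velocity in notes:
        if select(pitch):
            start = max(0, start + shift_ticks)
        out.append((start, pitch, velocity))
    return out

# Example: push every snare hit (General MIDI note 38) 15 ticks late.
drums = [(0, 36, 100), (480, 38, 110), (960, 36, 100), (1440, 38, 112)]
print(slide(drums, 15, select=lambda p: p == 38))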
REX files chop digital audio into “slices,” each of which is associated with a MIDI note. Playing a MIDI note triggers its associated slice, which is why REX files can follow tempo variations—slices can trigger at a faster or slower rate as you speed up or slow down a MIDI sequence. However, what really makes this fun is that you can also re-arrange the MIDI notes in a different order to trigger slices at times other than their original timings, or transpose the notes to trigger a different slice than the one the MIDI note would normally trigger.
In Propellerhead Software’s Reason, the slices driving a rhythm guitar part have been moved around in a phrase’s final measure to create a musically useful variation.
This kind of slicing and dicing is particularly effective with drum loops, because owing to the nature of REX files, each slice tends to be a single hit consisting of one or more drums. If you move these hits around, you can create a totally different drum pattern.
Part of making MIDI guitar feel “right” when triggering synths has nothing to do with the guitar and its tracking, but with editing the synth presets so that they’re guitar-friendly instead of being optimized with keyboards in mind.
Separate channels. The guitar will most likely send data from each string over a different channel. So, use synths in multitimbral mode, where each voice has its own channel. Depending on the synth, the fastest way to do this is to optimize a voice for one string on one channel, then copy over to the other channels.
Polyphony. Set each voice for one-note polyphony. Think about it—with any guitar, you can’t play more than one note at a time on a given string. MIDI guitar feels more realistic when it responds in the same way (and may even appear to track better).
Native Instruments’ Kontakt has six Clavinet voices, set to channels 1-6, for MIDI guitar. Note how maximum polyphony is set to 1.
Legato mode. If there’s a legato mode, consider using it. Then if you slide up the neck, you won’t retrigger a note at every fret along the way…then again, maybe that’s the effect you want.
If you don’t want to program the sounds yourself, East West has released a series of sounds specifically programmed for MIDI guitar.
Some MIDI instruments, particularly those from Arturia, include an external input for processing audio signals through the synthesizer’s filter, VCA, and effects modules. That’s cool enough, but of course, what’s even cooler is that you can then use MIDI to trigger filter and VCA envelopes, turn filter resonance up high and use a keyboard to “play” the filter frequencies as the audio goes through it, and more—the only limit is the extent to which elements within the synthesizer can interact with the input signal.
Arturia’s Mini V can also serve as a signal processor by feeding audio into the External Input. A volume control (highlighted in red) determines the level of the audio going through the synthesizer.
One of the complaints about “MIDI music” is that quantizing everything to the beat sucks the life out of a song by eliminating the kind of timing variations humans make. But that’s not the fault of MIDI— the problem is the person doing the quantization. So, here are three ways to make quantization more human-sounding.
Quantization strength. Instead of quantizing to the beat, quantize with 50% strength. This moves the note closer to the beat. If the timing still isn’t tight enough, quantize again by 50%. You’ll find that often, notes that are ahead of or behind the beat are intended to contribute feel, but the player isn’t precise enough with the timing—so the timing variations are too “loose.” Tightening up the timing can preserve the intent, but sound less sloppy.
Cubase’s quantize panel includes iQ (Iterative Quantize), set here for 50% and outlined in red. A little bit of swing has been added as well, but randomization is set to 0.
Groove quantization. This feature allows you to quantize to a humanized groove. For example, someone might have converted the audio from a percussion part played by a human into MIDI data, and you can use that as a template to quantize a percussion part instead of quantizing to the grid.
Swing. Even just a little bit of swing, like a couple of percent (52% or 2%, depending on how the program calibrates swing), can add a less rigid, more flowing feel to a piece of music. A quick sketch of the strength idea follows.
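The strength approach reduces to one line of arithmetic: move each note only part of the way toward the nearest grid line. A minimal sketch, assuming plain start-time values in ticks; the 16th-note grid at 480 ticks per quarter is an arbitrary example.

# Minimal sketch: strength-based quantization moves each note partway
# toward the nearest grid line, tightening the feel without flattening it.
GRID = 120   # a 16th note at 480 ticks per quarter

def quantize(starts, strength=0.5):
    out = []
    for start in starts:
        nearest = round(start / GRID) * GRID
        out.append(round(start + (nearest - start) * strength))
    return out

played = [4, 118, 247, 355]    # loose 16ths
print(quantize(played))        # halfway to the grid: [2, 119, 244, 358]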
Here is a link to more details from Cubase expert Matt Hepworth. Quantizing MIDI doesn’t need to make for a rigid, lifeless performance; using the advanced MIDI tools in Cubase, you can improve the timing and keep the groove.
When you’re songwriting, you want nothing to get in the way of your creativity, and you want as fast a workflow as humanly possible—so for those reasons, you’re better off starting the songwriting process with MIDI rather than recording audio (if you’re not a keyboard player, even a simple MIDI guitar controller like the Jamstik+ or You Rock guitar will do the job). Here are the two main advantages.
Transposition. You can transpose MIDI instruments quickly, while retaining sound quality. When you’re looking for the right key for your voice, you can find it in seconds.
Tempo changes. There’s a tendency when writing to play a bit more slowly because you’re feeling your way around the chord progressions, lyrics, etc. Once you’ve established the song’s framework, then you can experiment with different tempos until you find one that feels right.
This multi-timbral setup contains 16 different instruments to provide a palette for songwriting.
To get started, my tool of choice is a multitimbral synth like IK Multimedia’s SampleTank, with a preset that contains the kind of instruments needed for songwriting. Then it’s possible to lay down multiple tracks quickly to create the song’s overall shape, which makes choosing the key and tempo just that much easier.
Most people think of arpeggiation solely in melodic terms, but arpeggiation has additional uses.
General MIDI instruments include drum kits where the top notes are percussion sounds, and many virtual instruments include percussion presets. Setting up an arpeggiator in a random mode to trigger various percussive sounds can create a really cool effect. The wider the octave range, the more instruments the arpeggiator will play—which you may or may not want, if there are some annoying percussion sounds in the mix. Restricting the range, or using a non-random arpeggiator setting, can create a more “compact” set of sounds.
The arpeggiator in Cakewalk by BandLab is generating random arpeggiation over three octaves based on the notes held down to trigger percussion sounds.
This can also work well with multi-sampled instruments. Instead of stacking the multi-samples on one key and triggering with velocity, spread the multi-samples across multiple keys and use an arpeggiator to trigger them. You can end up with some delightful surprises this way. Just make sure that your program is always in record mode, because if a pattern is truly random—good luck duplicating it.
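If you’d like to experiment with the idea outside an arpeggiator, here’s a minimal sketch of a random “arpeggiator” in Python; the percussion note numbers and octave range are arbitrary examples, and seeding the random generator is one way around the “good luck duplicating it” problem.

# Minimal sketch: pick 16th-note hits at random from the held notes,
# spread over a chosen octave range. A fixed seed makes a happy
# accident repeatable.
import random

def random_arp(held_notes, octaves=3, steps=16, seed=42):
    rng = random.Random(seed)
    pool = [n + 12 * o for n in held_notes
            for o in range(octaves) if n + 12 * o <= 127]
    return [rng.choice(pool) for _ in range(steps)]

print(random_arp([36, 38, 42]))   # kick, snare, closed hat, plus octaves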
When audio plug-ins entered the mainstream, MIDI plug-ins took somewhat of a backseat because they weren’t the “shiny new toy” in town. However, with MIDI’s resurgence, companies are paying more attention to MIDI plug-ins. For example, Cubase has always had a great roster of MIDI effects, Ableton Live gives MIDI plug-ins almost equal standing with audio plug-ins, Logic added several in an update, Studio One has Note Effects, Digital Performer has various MIDI processors, and so on.
Top to bottom: Ableton Live Arpeggiator and Scale Constrain, Apple Logic Chord Trigger, and Steinberg Cubase Step Designer.
The great thing about MIDI plug-ins is that they can do non-destructive editing. Suppose you have a MIDI plug-in for quantization; when you lay down a drum part and don’t want to take the time to edit it to perfection, slip a MIDI plug-in into the drum’s MIDI track, and set it for 16th notes. The part will be quantized so you can play along with it easily as you lay down other parts. Once the song has developed sufficiently, then you can go back and do the needed timing edits to make the drum part really shine, and remove the plug-in.
MIDI plug-ins can also do other tricks like arpeggiation, velocity control, chord detection, snapping to scale, and even do “effects” like polyphonic echoes—ignore MIDI plug-ins at your own risk, because they’re really cool.
Cubase has always been one of the most powerful DAWs when it comes to MIDI programming, but did you know you don’t even need a keyboard to create beats and melodies?
If you think of a keyboard as playing only notes, four or five octaves may be sufficient. However, many virtual instruments (e.g., FXpansion Geist, Native Instruments Kontakt, EastWest’s Play engine, etc.) use MIDI keys not only to play specific notes but also to trigger articulations or variations on a basic sound. If your main USB MIDI controller doesn’t have enough notes, no worries—trade it in for that deluxe 88-note weighted keyboard you’ve always wanted (hey, you only live once). But if you lack the space or finances, add a second USB MIDI controller for doing switching—even if it’s just something like a little Korg plastic keyboard designed for mobile applications. Your sequencer probably won’t be able to merge incoming MIDI streams, but no worries there either: MIDI Solutions’ Merge will merge two data streams to a single output. There are also several DIY circuits for MIDI mergers on the web.
When you need more notes than a single keyboard can provide, merge the data streams from two keyboards with a MIDI Merger.
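If both keyboards connect to your computer over USB, a software merge is also possible. Here’s a minimal sketch using the Python mido library’s multi_receive; the port names are placeholders for your own devices.

# Minimal sketch: merge two USB MIDI controllers into one output stream.
# Port names are placeholders; list yours with mido.get_input_names().
import mido

inputs = [mido.open_input('Main 88-note Keyboard'),
          mido.open_input('Little Switching Keyboard')]

with mido.open_output('To DAW or Synth') as out:
    # multi_receive yields messages from all the ports as they arrive.
    for message in mido.ports.multi_receive(inputs):
        out.send(message)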
Jamstik+ from Zivix is an interesting solution for guitarists who want to play MIDI instruments. While it’s not a guitar, it feels mostly like a guitar because it has real strings, a neck, and frets—but it doesn’t make any sound. On the plus side, you don’t have to change or tune the strings. On the minus side, the basic version has only five frets, so it’s more of a “first-position chords” guitar. You can’t go much past a barre G or first position A played as C, and of course, playing leads high up on the “neck” is not possible, although you can transpose the range over which it plays.
Although invented more as a way to learn guitar, the Zivix Jamstik+ can also trigger virtual instruments via MIDI.
Because it’s physically small, you’ll need to use the included strap, and it’s a little harder to work your way around the neck than a guitar. However, it doesn’t take long to acclimate yourself, and if you want to lay down a MIDI part based on playing rhythm guitar, you’re good to go. Just remember a few tips:
Jamstik+ generates controller data that’s not relevant to what we’re doing. So, in your host of choice you can disable everything except notes to help thin out the data stream.
Glitches really aren’t an issue, because the Jamstik uses infrared sensors to detect when your finger is on a fret. However, you can generate sub-20 ms notes that, while not problematic, aren’t needed. Your recording software may have a function that lets you delete all notes below a certain duration or velocity with a couple of mouse clicks.
Jamstik+ can work wirelessly with Bluetooth LE MIDI as well as with a wired USB MIDI connection.
For best results with synths, use Jamstik in its multi-timbral mode, so each string goes to its own channel in a multitimbral synthesizer. This not only sounds more realistic, but plays more like a guitar. If your synth has a legato mode, that can give even better results for some types of musical material.
Here is a video on how to use Jamstik+ with Ableton with a free instrument download.
Since Nektar Technology, Inc. was founded in 2009, we have been passionate about our mission to bridge the gap between powerful music software and controller hardware. With software continuously evolving, a plethora of instruments and effects have become available, and because they run even on modest computers, music creation has become accessible to the many and not just the privileged few. The evolution of computer music hardware unfortunately has not matched the progress of software, so our mission was born: to create transparent and intuitive tactile products that allow musicians to control and operate music software as if it were hardware.
Impact LX49+ and LX61+ USB MIDI controllers
More Control. More Creativity. More LX+
The Impact LX49+ and LX61+ USB MIDI controllers are jam-packed with intelligent and expressive performance control not even available on many premium products. Ever wanted a controller that hooks up automatically to your DAW? Impact LX+ does exactly that. Nektar DAW Integration, custom designed for Bitwig, Cubase, Digital Performer, FL Studio, GarageBand, Logic, Nuendo, Reaper, Reason, Sonar and Studio One, takes Impact LX+ way beyond the functionality normally offered by a USB MIDI controller keyboard. With Impact LX+ the hard work is done, so you can focus on your creativity.
PACER boosts your creativity by providing hands-free control of your DAW, MIDI guitar soft- or hardware as well as channel and FX switching on your trusted analog amp. All integrated into one rugged and stage-ready foot pedal with 10 programmable LED foot switches, 4 switching relays and connections for up to 4 external foot switches and 2 expression pedals. That’s a lot of switching power right at your feet: With just one press of a button, you can send up to 16 MIDI and relay messages to reconfigure a setup instantly. Step up your pace with this powerful MIDI DAW Footswitch Controller!
Andrew Huang has done a lot with MIDI. His GLORIOUS MIDI UNICORN has 3,313,153 views on YouTube. He invented the hashtag #MIDIFLIP and made YouTube videos on how to make your own MIDI controller. He is also a dedicated user of Ableton Live and Push.
Yamaha has been intimately involved with the development of MIDI since the very beginning. We pioneered groundbreaking technology by making the first all-digital FM synthesizer, the epoch-making DX7. Yamaha synths like the Motif and the Montage have been the standard for touring and studio professionals for the past two decades. Recently Yamaha has been innovating with Web MIDI by developing the first social sound sharing community, Soundmondo. At the 2018 NAMM Show, the MIDI-CI specification initiative, which was spearheaded by Yamaha, was adopted by the MMA, paving the way for a new MIDI protocol in the near future. Yamaha makes more MIDI-enabled musical instruments than any other company on the planet.
Montage
Welcome to the new era in Synthesizers from the company that brought you the industry-changing DX and the hugely popular Motif.
Building on the legacy of these two iconic keyboards, the Yamaha Montage sets the next milestone for Synthesizers with sophisticated dynamic control, massive sound creation and streamlined workflow all combined in a powerful keyboard designed to inspire your creativity.
If you liked the DX and Motif, get ready to love Montage.
For keyboardists, music creators and sound designers – reface Mobile Mini Keyboards are reimagined interfaces of classic Yamaha keyboards.
reface CS
Analog modelling synth: simple control, complex sound, endless possibilities.
reface DX
FM synth: from nostalgia to trendsetter with modern control.
reface CP
Electric piano: retro control, classic sound and incredible response.
reface YC
Draw bar organ with rotary speaker.
SoundMondo Social Sound Sharing Site
Soundmondo is a social sound-sharing website and one of the first sites to implement WebMIDI, a W3C API pioneered by Google in Chrome. WebMIDI connects MIDI devices to your browser, allowing musical instruments to play online synthesizers, as well as save or share sounds with Soundmondo. Because WebMIDI is part of Chrome, Soundmondo works on Mac, PC, and Android devices. There is also a Soundmondo iOS application.
The reface Soundmondo iOS app lets you store and recall reface Voices on iOS and share them on Soundmondo. Each stored Voice can be rated, named and given a custom image from your photo library.
There are over 10,000 sounds available for browsing and sharing.
The Yamaha Disklavier is one of the most amazing MIDI instruments in the world.
The Yamaha Disklavier E3 combines technology with tradition to open up a whole new world of musical possibilities to explore. The E3’s innovative features help you find your own customized way to relax. When you pick up the remote control, you are instantly ready to enjoy new music over the Internet or listen to an old favorite from your personal CD collection. The E3 also comes with built-in speakers as well as exclusive Yamaha CDs, allowing you to start listening right away without a complicated set-up process. And no matter where you live, when you connect the E3 to the Internet, you gain access to a treasure trove of musical performances from the finest musicians in the world.
The history of the piano is a history of technological change and innovation, starting over 300 years ago with the escapement action of Bartolomeo Cristofori and continuing with knee levers, pedals, action modifications, cast iron frame, and so much more. This dynamic history has been the result of the passionate interaction between keyboard players, composers, and instrument makers.
In the 1970s, solenoid-based player systems were added to pianos for the first time. In 1987, Yamaha took that concept to a new level of quality and ease of use by introducing the Disklavier reproducing piano to North America.
The term Disklavier is a clever combination of the words disk (as in floppy disk) and Klavier, the German word for keyboard. At the time that the Disklavier was introduced, recordings were stored on 3 ½ inch floppy disks.
The Disklavier is fundamentally a traditional, acoustic piano with a built-in record-and-playback system. The record-and-playback system and its related features have changed substantially over the years, but one aspect of the Disklavier has remained constant: The Disklavier system has always been offered as a factory-installed system—never as a retrofit for existing pianos.
by George Litterst, from The History of the Disklavier on the Disklavier Educational Network
Dan Tepfer uses the Yamaha Disklavier and MIDI to create unique compositions
Dan Tepfer is a jazz musician who has developed software to allow him to “improvise” with his computer. When Tepfer plays a note on his Disklavier, MIDI is sent to SuperCollider, an open-source tool for programming algorithmic music. Tepfer has created different algorithms to augment his playing: for example, retrogrades of whatever he plays, or echoing notes in different octaves. He can even trigger cascades of notes based on harmonic patterns.
For more details on Dan Tepfer’s work, check out these two articles from Engadget and NPR.
Dan Tepfer is an acclaimed jazz pianist and composer who has played venues from Tokyo’s Sumida Triphony Hall to New York’s Village Vanguard. He also has a degre…
Tepfer sees jazz as the pursuit of freedom within a framework — a premise that underlies his work with improvisational algorithms and a Yamaha Disklavier. He unpacks the project in this video.
For many years, Yamaha has sponsored the piano e-competition. Classical pianists from all over the world come to have the opportunity to perform on Yamaha CFX concert grand pianos equipped with state-of-the-art Disklavier Pro recording technology. This system, which was pioneered by Yamaha, is the fusion of the acoustic piano and computer electronics, and allows all solo rounds of the competition to be downloaded via MIDI and enjoyed anywhere in the world. This year Google also joined as a piano e-competition sponsor and is using the e-competition’s classical MIDI files to train its Music AI engine.
Check out our articles on Google Music AI initiatives and on the piano e-competition.
Game design is an incredibly exciting creative industry that ties together many different disciplines, including graphical design, audio engineering and coding. Audio specialists in particular have one of the more lucrative positions in the industry, with sound designers earning $51k on average for using systems like MIDI to create epic soundtracks and sound effects. Game designers like John Romero and Gabe Newell have gone on to become genuine celebrities.
It’s not all about cash, of course, and game design is a fantastic vocation that gives near limitless avenues for creative and technical minds. You’ll find yourself working with cutting-edge stuff, whether that’s the latest graphics engine or art direction, or bespoke computer-led sound systems. So how can you get into game design? What skills do you need, and how do you apply them?
Qualifications
Before you look at what skills you might have to offer to the game design industry, you need to consider how you’ll find exposure.
There are generally two major ways to get into the game design industry. You can become part of an indie studio, or design on your own (if you’re a jack of all trades) and work hard via platforms like Steam to grab the attention of consumers. You can also use services like DeviantArt to attract admirers of your work, or SoundCloud for audio work. The other, more straightforward route is to opt for a college education. There are a range of colleges for game design. Some will take on students with an all-rounder education style, whilst others will be hoping to see students with a specific specialization.
Creative Arts
Every game design studio needs high quality creative workers, usually working under production staff or creative leads. What this means is people who have a good imagination and the skills to be able to illustrate, digitally or not, their ideas. This can also mean being able to properly vocalize what they’re imagining, through pitches to the lead designers or written prompts.
Audio designers can come in various formats. There are engineers, who are often employed to work with MIDI interfaces to micro-engineer in important aspects of the overall audio design, such as equalizer optimization and balancing. There are audio designers, too, employed to creatively design new sounds and music for the product. Like creative art, the key here is thinking outside of the box. To take an example from film, did you know the famous TIE fighter engine noise from Star Wars was made by combining an elephant’s call and a car driving on wet pavement?
Programmers
Behind every game and the creative design put into it is the code, built on the engine the developer has chosen. Computer development is an incredibly varied and wide-ranging profession in which you can find numerous opportunities, and game design is a lucrative and creatively minded corner of it.
Game design is a fantastic profession, full of creativity and the remit to really think outside of the box to create new titles. With lucrative earnings and the ability to tackle the latest technology, it’s a way of staying ahead of the times. If you’re dedicated, you could be a game designer.
MICROTUNING VIRTUAL AND ELECTRONIC HARDWARE INSTRUMENTS: AN OVERVIEW OF FORMATS AND METHODS FOR USING ALTERNATIVE INTONATION SYSTEMS
Electronic hardware synthesis enthusiasts, as well as computer-based musicians and composers, who wish to explore the vast expressive possibilities, new harmonies and melodic potential of alternative intonation systems in their music creation processes (just intonation, temperaments, non-octave scales, historical microtunings, etc.) will inevitably face the complexity of dealing with the different kinds of popular microtuning formats, including various types of tuning tables, MIDI SYSEX, scripts, etc., required for retuning their hardware and software instruments.
Since there currently are no universal methods for changing the intonation of electronic musical instruments, the task of microtuning ensembles of virtual or hardware instruments to a single intonation system, much less a dynamic intonation environment, can often be a daunting chore for newcomers to the field of xenharmonic and microtonal music composition.
The primary concern of this short article is music software and hardware developers who offer products featuring what is often referred to as full-controller, or otherwise full-keyboard, microtuning, and some of the currently popular methods for changing their underlying intonation to tuning systems other than the well-worn and ubiquitous 12-tone equal temperament that has been the de facto standard in Western music since the 19th century.
Essentially, full-keyboard microtuning gives musicians and composers complete, unrestricted control over how the pitches of intonation systems are directly mapped to MIDI Notes on their controllers, and enables mappings that can have fewer or more than 12 notes repeating across the range of the instrument, as well as allowing the use of systems that have repeat intervals other than the typical 2/1 octave at 1200 cents.
Starr Labs Microzone U-648 Generalized Keyboard
Here are some of the ways that the complexity of microtuning ‘format overload’ may manifest for electronic musicians and composers:
Buyer beware and be informed: There are a bewildering number of different microtuning implementations…
12 Note Octave Repeating Microtuning
Some virtual and hardware instruments, as well as some DAWs (for example, Alchemy and the other virtual instruments featured in Apple Logic), may only permit retuning 12 pitches within a 2/1 octave boundary of 1200 cents. It’s important to recognize that although these instruments may be capable of generating a huge range of amazing timbres and sound-designs, this restricted kind of tuning implementation is not capable of full-controller, or full-keyboard, microtuning, and therefore has far less utility for serious microtonal and xenharmonic music composition. Their design remains locked into thinking about musical instrument intonation in terms of 12 octave-bound notes repeated across the musical range, so they are incapable of being used for intonation systems that feature more or fewer than 12 notes, or ones that may not repeat at the interval of an octave at all.
Among the many possible examples, 12 Note Octave Repeating Microtuning would prohibit the use of such popular microtunings as Bohlen-Pierce, which divides the 3rd harmonic into 13 equal parts and has a repeat interval of a 3/1 at 1901.955 cents (a short script after the table below reproduces these values):
Bohlen-Pierce: ED3-13 – Equal division of harmonic 3 into 13 parts
0: 1/1 0.000000 unison, perfect prime
1: 146.304 cents 146.304230
2: 292.608 cents 292.608460
3: 438.913 cents 438.912690
4: 585.217 cents 585.216920
5: 731.521 cents 731.521150
6: 877.825 cents 877.825390
7: 1024.130 cents 1024.129620
8: 1170.434 cents 1170.433850
9: 1316.738 cents 1316.738080
10: 1463.042 cents 1463.042310
11: 1609.347 cents 1609.346540
12: 1755.651 cents 1755.650770
13: 3/1 1901.955001 perfect 12th
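The table above is easy to reproduce: each step is one thirteenth of the 3/1 “tritave” of 1901.955 cents (1200 × log2(3)). A minimal sketch in Python, assuming nothing beyond the arithmetic:

# Minimal sketch: compute the 13 equal divisions of the 3/1 (the
# Bohlen-Pierce "tritave"). 1200 * log2(3) is approximately 1901.955 cents.
import math

tritave_cents = 1200 * math.log2(3)
for step in range(14):
    print(f"{step:2d}: {tritave_cents * step / 13:9.3f} cents")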
The restriction of 12 Note Octave Repeating Microtuning would also preclude the use of the famous Wendy Carlos Alpha (78-cent step size), Beta (63.8-cent step size) and Gamma (35.1-cent step size) systems, none of which feature a repeat interval of a 2/1 (Tuning: At the Crossroads, Computer Music Journal, Vol. 11, No. 1, Microtonality, Spring, 1987).
There are countless other such examples of historical and contemporary musical instrument intonation systems that would be able to easily illustrate the glaring shortcomings of being restricted to only 12 notes repeating at the 2/1. For musicians and composers to be able to encompass the full range of expression and compositional possibilities of using alternative intonation systems in their music – including, but not limited to, 12 Note Octave Repeating Microtunings – it is advised to support those visionary developers who have implemented full-keyboard microtuning in their instruments. With correctly implemented full-keyboard microtuning functionality, there is no compromise in the way that one may microtune their hardware or virtual instruments.
Xfer Records Serum supports full-keyboard microtuning with the TUN format
The Scala SCL/KBM Specification
Some microtuning implementations may allow retuning instruments with more or less than 12 tones, but provide no uniform method for independently configuring the Key For 1/1 (the MIDI Note on which the microtuning will start) and Reference Frequency (the MIDI Note on which the reference pitch will be mapped, for example, the concert standard of 69.A at 440 Hz). Such is the case with the widespread implementation of the Scala SCL format, where the linear KBM (keyboard mapping) part of the standard has been omitted, a topic which we will explore more in depth ahead.
Native Instruments Kontakt Script Language: KSP
Native Instruments Kontakt, which in theory enables full-controller microtuning, may have encrypted commercial sample libraries that strictly prohibit changing the intonation with its KSP scripting language. Moreover, KSP scripts may be used for sophisticated key-switching, or other such articulation schemes, that might prevent using a full-keyboard microtuning KSP script at the same time. Users of Kontakt should be fully prepared and equipped to program their own KSP scripts, sample instruments and libraries to ensure that they can be fully microtuned, as many developers of Kontakt libraries may not be empathetic to the requirements of microtonal and xenharmonic music composition, and very well may have designed their instruments with no, or extremely limited, ability for full-keyboard microtuning with the KSP language. In other words, they may be entirely ’12-locked’, and incapable of rendering music with intonation systems other than 12-tone-equal-temperament.
Full-Keyboard Microtuning: TUN and MTS Formats
Virtual instruments that can more easily achieve high-precision full-controller microtuning are those where developers have implemented either the TUN or MTS (MIDI Tuning Standard) microtuning formats, which enable saving all of the microtuning mapping information into a single tuning data file that may be loaded directly into the instruments, or in the case of MTS, transmitted from the timeline of DAWs that allow transmitting SYSEX, such as REAPER and Bitwig.
Microtonal music software developers may have their own unique data-management strategies for working with microtuning files.
Some developers may have designed their microtonal software synthesizers and samplers so that microtuning format files may be loaded into their virtual instruments from any directory on the user’s computer, which empowers computer musicians and composers to use and maintain a single centralized global microtuning directory for all virtual instruments, while others may require that the microtuning data files be stored within the plugin’s directory.
Where developers have employed the latter method of requiring users to store microtuning files within the plugin’s directory only, and do not permit loading them from any directory on the computer, users of the software must maintain multiple concurrent microtuning archives, one for each plugin that uses this method, such as in the case of the excellent u-he virtual instrument line (Diva, Zebra, Bazille, ACE). This adds another layer of complexity for working with microtunings and managing the tuning file data.
u-he Zebra 2 Tunefiles directory
Microtuning Formats: A Closer Look
Let’s more closely consider here some of the currently popular methods for microtuning computer music based virtual instruments and some hardware instruments, with this brief overview of their features and benefits:
TUN
The TUN format, invented by visionary developer Mark Henning, is currently among the most popular and widely used microtuning formats for computer music virtual instruments. He is also the developer of the AnaMark VSTi synthesizer, which was first published with TUN support on February 19, 2003, making it among the earliest VSTis supporting full-controller microtuning tables. The TUN format is an elegant solution for retuning MIDI-controlled virtual instruments to alternative intonation systems, because both the MIDI Note Number on which the 1/1 starting note of the microtuning will be placed, as well as the MIDI Note Number on which the Reference Frequency will be mapped, can be freely and independently specified, and is embedded within a single text file that is read by the instrument.
Mark Henning invented the TUN microtuning format and introduced it in his AnaMark VSTi in 2003
Pros:
TUN is a high precision microtuning-table text format that includes the scale and MIDI Note mapping information in cents.
Users can specify both the Key For 1/1 (the MIDI Note on which the microtuning will start) and Reference Frequency (the MIDI Note on which the reference pitch will be mapped, for example, the standard concert pitch of 69.A at 440 Hz). Typically these critical parameters are configured and the data exported using dedicated microtuning applications such as Scala, which enables users to save versions of scales with different mappings as required of the music at hand.
Virtual instruments can be fully microtuned using a single TUN file.
Human readable with a text editor.
Cons:
No dynamic, real-time microtuning.
To change to other intonation systems, a new TUN file must be manually loaded by the user for every instrument being used in a composition that requires it.
Some virtual instrument software developers that have implemented the TUN microtuning format in their products: Big Tick, Linplug, MeldaProduction, Plugin Boutique, Rob Papen, Robin Schmidt, Spectrasonics, TAL Software, u-he, VAZ Synths, Xfer Records.
Scala SCL/KBM
Also popular is the SCL/KBM format from Manuel Op de Coul, the developer of the versatile Scala microtuning application. It is an excellent and flexible text-based format that is ideal for archiving intonation systems, which may be expressed in ratios and/or cents.
Scala: The musical instrument intonation analysis and microtuning format file creation application by Manuel Op de Coul
Pros:
Virtual instruments can be fully microtuned using both the SCL and linear KBM files. SCL is the part of the standard that contains the intervals of the scale, while the linear KBM part is what determines how the pitches are mapped directly to MIDI Notes on the controller.
Human readable with a text editor.
The Key For 1/1 (the MIDI Note on which the microtuning will start) and Reference Frequency (the MIDI Note on which the reference pitch will be mapped, e.g., the standard concert pitch of 69.A at 440 Hz) can be independently specified and freely changed using the linear KBM (Keyboard Mapping File).
Cons:
No dynamic, real-time microtuning.
To change to another intonation system, a new SCL and a linear KBM file must be manually loaded by the user for every instrument.
An important note regarding the Scala SCL/KBM format
Both the SCL and linear KBM parts of the Scala specification are required to achieve full-controller microtuning and provide users the ability to fluidly change how intonation systems are mapped to their controllers. The reality is that very few developers have correctly implemented both SCL and linear KBM functionality, so where instruments are only able to load the SCL file, without the linear KBM part, it may not always be possible to independently change the Key For 1/1 (the starting MIDI Note of the microtuning) and the Reference Frequency (the MIDI Note on which the reference pitch will be mapped, e.g., the standard concert pitch of 69.A at 440 Hz).
Often, without the ability to load the linear KBM files, such as in the case of the Cakewalk and Image Line virtual instruments, Reveal Sound‘s Spire, and all of the Applied Acoustics VSTis (sadly, their great sounding Chromaphone physical modeling instrument included), which use only the SCL part of the Scala specification without the linear KBM, the Key For 1/1 and the Reference Frequency are treated as one and the same. Other such worst-case-scenario implementations of SCL may map any loaded microtunings to start on middle C (MIDI Note 60.C), and provide no convenient method for changing the mapping of an intonation system at all.
For example, it would be virtually impossible in these virtual instruments to load a Scala SCL microtuning and have the Key For 1/1 start on MIDI Note 60.C, and at the same time have the Reference Frequency on MIDI Note 69.A @ 440 Hz, because, without the KBM file, the Key For 1/1 and Reference Frequency are configured by a single parameter: set the reference note to 69.A @ 440 Hz, and both the Key For 1/1 and Reference Frequency are mapped on MIDI Note 69.A @ 440 Hz. Likewise, when setting the reference note to 60.C @ 261.625565 Hz, both the Key For 1/1 and Reference Frequency for the microtuning are mapped on 60.C @ 261.625565 Hz. This may be all well and fine for many Equal Temperaments, but with a universe of other types of intonation systems that feature different step sizes and intervals under modal rotation (MOS, just intonation, microtonal-modes-of-limited-transposition, etc.), the SCL-without-KBM microtuning mapping paradigm immediately fails to accurately render microtunings with discrete Key For 1/1 and Reference Frequency parameters, and will not sound in tune with ensembles of instruments that are microtuned in this manner.
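To make the distinction concrete, here is a minimal sketch in Python of the frequency math a correct full-keyboard implementation performs, with independent Key For 1/1 and Reference Frequency parameters. The function names and the 12-tone example values are illustrative only, not any particular product’s API.

# Minimal sketch: frequency of a MIDI note under a full-keyboard
# microtuning with independent Key For 1/1 and Reference Frequency.
# scale_cents lists one repeat of the scale starting at 0.0;
# repeat_cents is the repeat interval (1200.0 for an octave,
# 1901.955 for Bohlen-Pierce).
def note_cents(note, key_for_1_1, scale_cents, repeat_cents):
    cycles, degree = divmod(note - key_for_1_1, len(scale_cents))
    return cycles * repeat_cents + scale_cents[degree]

def note_freq(note, key_for_1_1, ref_note, ref_freq,
              scale_cents, repeat_cents):
    cents = (note_cents(note, key_for_1_1, scale_cents, repeat_cents)
             - note_cents(ref_note, key_for_1_1, scale_cents, repeat_cents))
    return ref_freq * 2.0 ** (cents / 1200.0)

# 12-tone equal temperament, Key For 1/1 on 60.C, reference 69.A @ 440 Hz:
edo12 = [i * 100.0 for i in range(12)]
print(note_freq(60, 60, 69, 440.0, edo12, 1200.0))   # ~261.63 Hz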
As we can see, in cases where developers have naively omitted the linear KBM part of the Scala specification, this causes a huge complication for musicians and composers endeavoring to easily microtune ensembles of virtual instruments to a common intonation system, where the requirements of specialized MIDI controller mappings, as well as the music at hand, are that the Key For 1/1 and Reference Frequency need to be independently specified for all of the instruments being used in a particular microtonal or xenharmonic compositional scenario.
The ability to freely map these two parameters of microtunings becomes especially critical when working with various kinds of hexagonal array keyboards, such as the Starr Labs Microzone U-648 Generalized Keyboard, C-Thru Music AXis-64 and AXis-49, as well as grid-based MIDI controllers like the excellent Roger Linn Design LinnStrument and the Novation LaunchPad Pro. It would also be crucial for mapping microtonal tunings to Elaine Walker’s Vertical Keyboards, which feature Halberstadt-style MIDI key-beds with customized key arrangements that are designed to accommodate a wide range of microtonal tunings and ergonomic fingering requirements.
There is hope: Modartt Pianoteq gets it right
Among the most elegant (and correct) implementations of the Scala SCL and linear KBM microtuning format is the one found in the excellent physical-modeling Modartt Pianoteq virtual instrument, which enables musicians and composers to directly load both Scala SCL microtunings and the KBM Keyboard Mapping files from its user interface.
Modartt Pianoteq 5 correctly implements the Scala SCL and linear KBM specification
Below are a couple of linear KBM file examples to illustrate the microtuning mapping flexibility embodied in the Modartt Pianoteq implementation of the Scala SCL/KBM specification:
60-440-69.kbm: This KBM file would place the Key For 1/1 on MIDI Note 60.C, while mapping the Reference Frequency to MIDI Note 69.A at a frequency of 440 Hz:
! 60-440-69.kbm
!
! Size of map:
0
! First MIDI note number to retune:
0
! Last MIDI note number to retune:
127
! Middle note where the first entry in the mapping is mapped to:
60
! Reference note for which frequency is given:
69
! Frequency to tune the above note to (floating point e.g. 440.0):
440.000000
! Scale degree to consider as formal octave:
0
! Mapping.
52-262-60.kbm: Here the KBM file would place the Key For 1/1 on MIDI Note 52.E, while mapping the Reference Frequency to MIDI Note 60.C at a frequency of 261.625565 Hz:
! 52-262-60.kbm
!
! Size of map:
0
! First MIDI note number to retune:
0
! Last MIDI note number to retune:
127
! Middle note where the first entry in the mapping is mapped to:
52
! Reference note for which frequency is given:
60
! Frequency to tune the above note to (floating point e.g. 440.0):
261.625565
! Scale degree to consider as formal octave:
0
! Mapping.
When advocating for the Scala microtuning format…
Let’s hope that this information helps to illuminate the issues around full-keyboard microtuning with Scala files, inspires musicians and composers advocating for the Scala SCL format to include the crucial KBM part in their advocacy, and shows developers how critically important the combination and correct implementation of both the SCL and linear KBM parts of the Scala specification are to serious microtonal and xenharmonic music composition.
Some virtual instrument software developers that have correctly implemented the Scala SCL/KBM format in their products: Modartt Pianoteq, ZynAddSubFX 2.4.1, amSynth (Linux), UVI.
MTS (MIDI Tuning Standard)
The MIDI Tuning Standard is an ultra-high-resolution specification for microtuning MIDI instruments agreed upon by the MIDI Manufacturers Association, and was developed by visionary microtonal music composers Robert Rich and Carter Scholz. The standard includes both Bulk Dump and Single Note microtuning with a resolution of 0.0061 cent, which essentially divides the octave into 196,608 equal parts. It remains among the best and most flexible real-time microtuning formats available today.
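For the technically curious, a single-note tuning change is compact enough to build by hand. Below is a hedged sketch using the Python mido library, assuming the real-time Single Note Tuning Change layout (F0 7F device 08 02 program count key semitone msb lsb F7), where the fraction above the semitone is encoded in 1/16384ths of a semitone, which is where the 0.0061-cent figure comes from; check the byte layout against the official MMA specification before relying on it.

# Hedged sketch: build an MTS real-time Single Note Tuning Change SysEx.
# Assumed layout: F0 7F <device> 08 02 <program> <count> <key> <semitone>
# <msb> <lsb> F7, with the fraction in 1/16384ths of a semitone
# (about 0.0061 cents per step). Verify against the MMA specification.
import mido

def single_note_tuning(key, semitone, cents_up, device=0x7F, program=0):
    fraction = round(cents_up / 100.0 * 16384)       # 14-bit fraction
    msb, lsb = (fraction >> 7) & 0x7F, fraction & 0x7F
    data = [0x7F, device, 0x08, 0x02, program, 1,    # one note change
            key, semitone, msb, lsb]                 # mido adds F0/F7
    return mido.Message('sysex', data=data)

# Retune key 61 to 146.304 cents above middle C (a Bohlen-Pierce step):
print(single_note_tuning(key=61, semitone=61, cents_up=46.304).hex())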
Dave Smith Instruments’ excellent synthesizer line features MTS full-keyboard microtuning support
Pros:
Virtual instruments can be fully microtuned using single MTS files.
Has been a part of the MIDI Specification since the 1990s.
Single instruments, as well as entire ensembles of virtual instruments, can be fully and dynamically microtuned in real time within DAWs or sequencers that support the transmission of MIDI SYSEX data to instruments, without the need to manually load new microtuning files by hand in the manner required with TUN and SCL/KBM.
Cons:
The format is MIDI SYSEX data, and therefore is not human-readable.
Some virtual instrument software and hardware developers that have implemented the MTS microtuning format in their products: Dave Smith Instruments, E-mu, Ensoniq, Native Instruments, MOTM, Synthogy, Tubbutec, WayOutWare, Xen-Arts, Yamaha.
An important consideration and current reality for the MTS format is that not all DAWs (Digital Audio Workstations) allow the transmission of MIDI SYSEX to plugins from their timelines, although some, such as REAPER and Bitwig, do. Also, the new VST3 format has tragically dropped a lot of the MIDI functionality that was among the most fascinating possibilities of the VST 2.4 SDK, rendering VST3 a huge unknown factor in the future of microtuning virtual instruments.
Xen-Arts IVOR2 (x86) VSTi for Windows features full-keyboard microtuning with the MTS format
We collected links to the top MIDI educational resources from around the web. Links are embedded in the logos, the pictures and the text in blue. Clicking on a link in a picture takes you directly to what is pictured.
Berklee Online is the online extension school of Berklee College of Music, delivering access to Berklee’s acclaimed curriculum from anywhere in the world. Berklee Online’s award-winning online courses, multi-course certificate programs, and Bachelor of Professional Studies degree are accredited and taught by the college’s world-renowned faculty, providing lifelong learning opportunities to people interested in music and working in the music industry.
Berklee Online has a huge number of videos on YouTube, which are of course free to view.
Berklee Online offers 12-week online courses with instructors. You can take them for collegiate credit or non-credit. As they are actual Berklee College courses with accreditation, they are similar in price to other college courses.
We have partnered with NonLinear Educating to integrate video tutorials on MIDI and music production directly into the MIDI.org site. For less than $17 a month, you can watch as many videos as you want and take certification tests.
Knowledge is power and whether you want to learn more about MIDI, improve your ear training, or increase your skills with the DAW of your choice including Pro Tools, Cubase, Logic, Ableton, GarageBand and many more, MIDI.org’s online videos provide it all.
NonLinear Educating has created a unique business model that donates a portion of the revenue for courses back to The MIDI Association. As the MIDI Association is an all-volunteer, nonprofit organization, we take whatever revenue we receive from the courses and put it back into improving member services.
So when you sign up for a subscription for video courses, you are not only getting a great deal on some of the best video training on the planet, you are also helping The MIDI Association bring more services to all its members.
You can either purchase a monthly or annual pass to view all of the hours and hours of videos or purchase single courses.
Our mission at Dubspot Online is to help you expand your knowledge of music and technology so you can bring your creative visions to life. We offer programs in Music Production, Sound Design, Mixing and Mastering, DJing, and Music Foundations. Our innovative curriculum helps you learn how to create and perform your music using state-of-the-art software such as Ableton Live, Logic Pro, Maschine, and Traktor, with a top-notch faculty of professional DJ/producers, musicians and engineers to help guide your journey.
Like Berklee, Dubspot has tons of YouTube videos you can view at no charge.
The mission of TI:ME is to assist music educators in applying technology to improve teaching and learning in music. The initial goals stated in 1995 were:
To codify music technology into a cohesive set of standards.
To develop a certification process to recognize the achievement of in-service music teachers in music technology.
To develop an organization, national in scope and focused on the subject of teacher training in music technology.
Here are links to some of the best MIDI film scoring resources. Any text in blue is a link.
Of course, we have some great resources right on our site from Nonlinear Educating. Peter Schwartz has put together not only the core DeMystifying MIDI course, but advanced video courses in MIDI Orchestration.
These courses are designed to get you up-to-speed with the tools and techniques of creating orchestral mockups using MIDI. You learn the techniques of how to make MIDI instruments sound “real”. You see how to set up orchestral templates and get a comprehensive look at compositional tools you can use to create different moods and styles.
You can preview chapters from the videos to see if it has the information you need.
V.I. Control Forum is a community oriented around virtual instrument applications, with some of the most highly experienced contributors posting in real time. VI Control Forum has become an important resource for thousands of musicians and composers all over the world. Many of these contributors are experts in their field.
Fantastic MIDI Mockups is a collection of the best orchestral scores, orchestrations, and arrangements that use sample libraries. They are graded by members of VI Control Forum which supports a large community of those using samples in digital orchestrations.
The Film Music Society is a non-profit organization established by professionals in the film and music communities. The FMS promotes the preservation of film and television music in all of its manifestations, including published and unpublished scores, orchestrations, recordings and all related materials. It is the leading organization for film and television music preservation in the world, with members in eighteen countries.
SCOREcast Online is a resource community of working film, television, video game and mixed media music professionals that is dedicated to providing relevant news, commentary, and education for the professional media production community. Its core aim is to inform and educate anyone interested in the re-sophistication of the business and craft of making music for all visual media applications.
Midi Film Scoring is a resource site for TV, film, and game composers who work primarily with virtual instruments and MIDI sequencers. Here you’ll find film scoring tips and tutorials, news about free VST instruments and the best sample libraries, and industry news.
MFS has also put together a pretty comprehensive list of online film scoring resources.
Covering the best and worst of original film and television music since 1996, Filmtracks is a spirited home for comprehensive, humorous, and controversial soundtrack reviews.
At Broadjam you can submit your songs to music industry pros who will give you real feedback, get your songs heard by the people that license music for films and TV shows, or create a professional website in minutes. No HTML knowledge required.
TAXI helps independent Songwriters, Artists, and Composers get their music to Record Labels, Film & TV Music Supervisors, Music Libraries, Music Publishers, Music Licensing Companies, Ad Agencies, and Video Game Companies.
Songsalive! is a grassroots, philanthropic, volunteer-managed charity organization run by songwriters for songwriters, dedicated to the nurturing, support, education and promotion of songwriters and composers worldwide.
ASCAP is home to more than 625,000 music creator members across all genres – the greatest names in music, and thousands more in the early stages of their careers.
BMI is the bridge between songwriters and the businesses and organizations that want to play their music publicly. As a global leader in music rights management, BMI serves as an advocate for the value of music, representing nearly 12 million musical works created and owned by more than 750,000 songwriters, composers and music publishers.
With the boom in open-source electronics platforms like Arduino and the growth of 3D printers, it's become easier and easier to create your own MIDI controller. We wanted to introduce you to some of the people and companies who helped create the DIY MIDI revolution.
Moldover - The Godfather of Controllerism
Moldover is the acknowledged godfather of controllerism. He has been a long time supporter of The MIDI Association and we featured him as a MIDI artist in 2016. He was one of the first people to develop his own DIY MIDI controller.
Shawn Wasabi has 574,651 subscribers and 54,314,415 views on his YouTube channel. He started by combining multiple 16-button MIDI Fighters together with game controllers. Eventually he convinced DJ TechTools to make him a 64-button version of the MIDI Fighter with Sanwa arcade buttons.
Livid Instruments has been at the forefront of MIDI controller experimentation since 2004. They have a number of manufactured products.
minim- mobile MIDI controller
Guitar Wing MIDI controller
Ds1 MIDI controller
But Livid also makes some great components for DIY projects like the Brain V2.
Easily create your own MIDI controller with Brain v2. Brain v2 contains the Brain with a connected Bus Board for simple connectivity. Connect up to 128 buttons, 192 LEDs, and 64 analog controls. Components are easily connected with ribbon cables, and we've created the Omni Board to allow dozens of layouts with a single circuit board. Brain v2 supports faders, rotary potentiometers, arcade buttons, rubber buttons, LEDs, RGB LEDs, LED rings, encoders, velocity-sensitive pads, accelerometers, and more.
by Livid
Links to MIDI.org resources for DIY MIDI projects so you can DO IT YOURSELF!
Introduction: The Arduino UNO is a popular open-source microcontroller that, in many respects, is a perfect complement to the extensible nature of the Musical Instrument Digital Interface (MIDI) protocol. Microcontroller platforms such as Arduino and Teensy…
Instructables is a site which hosts DIY projects and is a platform for people to share what they make through words, photos, video and files. We have gone through the many MIDI DIY projects and picked out some of our favorites…
Companies and products listed here do not imply any recommendation or endorsement by the MIDI Manufacturers Association. MIDI Processing, Programming, and Do It Yourself (DIY) Components: these are just examples of such products; we make no warranty regarding suitability (or anything else, for that matter). Use at your own risk…
Pens and styluses have been employed as computer interaction devices for quite some time now. Most commonly they were used with peripheral graphics tablets to give the artist or designer a more natural flow than a mouse could muster. With the release of the Surface Pro hybrid laptop in 2012, Microsoft brought along a digital pen that could work directly on the screen. It was intended to bridge the gap between the demands of desktop software and the tablet touchscreen form factor: in a computing environment free of mouse and trackpad, how better to access the finer details that thick fingertips can't manage?
The advantages for the artist become quickly apparent. As the Surface Pro has evolved, its graphical power has gotten to the point where it's a completely competent sketching, drawing and design platform. But there's another group of artists for whom the digital pen has an awful lot of potential, and that's the musician.
This is probably most joyously demonstrated by the Windows 10 app Staffpad. Staffpad takes the idea of writing music completely literally: it presents you with a blank sheet of manuscript paper and asks you to start writing. Combining the digital pen with handwriting recognition, Staffpad is able to interpret your handwritten notes into digital MIDI information directly on a score. It can then be played back through a virtual orchestra. It's a stunning piece of work and remarkably fluid and creative to use.
Most of us approach music creation in a more sequenced format. The pen has a lot to offer here as well. Entering notes into a piano roll immediately comes to mind, as does the editing of notes, the trimming of clips or moving blocks in an arrangement. Consider drawing in track automation, with a pen rather than a mouse. How much more fluid and natural could that be?
In many ways the pen feels like it's simply replacing the actions of a mouse, but it doesn't quite work like that. The Surface Pen works through a combination of technology in the pen and a layer of corresponding technology on the screen. It's not just touch-screen technology: you can't take the Surface Pen and use it on another brand of screen; it will only work on Surface products. While that affords the technology a great deal of power, it can also trip up software that isn't able to interpret the technology properly. In many cases the pen works just like a mouse replacement, but in others it can cause weird behaviour, or no behaviour at all.
When PreSonus first released their touch-enabled version 3 of Studio One, its reaction to the Surface Pen running on the Surface Pro 3 was to get quickly confused and then lock up. In Cakewalk Sonar, again touch-enabled, there were areas in the software that completely refused to acknowledge the presence of a pen on the screen. Both of those DAWs have far better support for it now. Ableton Live appeared to work with both touch and the pen without any trouble, except that when grabbing a fader or knob control the value would leap between the maximum and minimum, making it impossible to set accurately. Adding support for "AbsoluteMouseMode" in a preferences file cured that particular oddity.
Where it's been most unflinchingly successful is within Steinberg's Cubase and Avid's Pro Tools, neither of which has expressed any interest in touch or pen interaction, yet it simply works anyway. From entering and editing notes to drawing in long wiggly lines of modulation and automation, the pen becomes a very expressive tool.
However, for the full immersion that the pen can offer, this tends to mean eschewing the keyboard. When you are leaned in, as I mentioned earlier, having to pull back to use a keyboard shortcut can be rather jarring and disruptive to your workflow. There's a certain amount you can do with the on-screen virtual keyboard, but it can completely cover what it is you're trying to edit, so it's not ideal. This highlights what I see as the current flaw in the Surface Pen workflow: the lack of a relevant, customisable toolbar.
When editing notes or an arrangement with the pen, simple tasks such as copy and paste become cumbersome. You can evoke a right-click with the squeeze of a button and then select these tasks from the list, or you can glide through the menu system, but neither of these options is as elegant as a simple Ctrl-C and Ctrl-V. You can quickly extend that to other actions: opening the editor or the mixer, duplicating, setting loop points. There's a whole raft of commands hidden away behind menus or keyboard shortcuts that are annoying to reach with just the pen for input. Adding a simple macro toolbar with user-definable keyboard shortcuts would greatly enhance the pen's workflow. It's possible to do this with third-party applications, but it really needs support at the OS level.
This is something Dell have considered with their Canvas touch-screen and digital pen system. They have incorporated floating "palettes", little toolbars that access useful keyboard shortcuts. Some DAWs, such as Bitwig Studio and PreSonus Studio One, have fingerable toolbars that can perform a similar function, but something more global would be helpful.
With the release of the Surface Pro (2017), Microsoft have introduced an improved Surface Pen with four times the resolution of the previous version. Although more relevant to the artist who draws, it's interesting to see pen support improving in many DAWs. Its usefulness is becoming more apparent, and if you consider the Dell Canvas and the iPad Pro Pencil, along with the development of the Surface into the larger Surface Studio and laptop form factors, it's also becoming more widespread.
At the time of writing, only one DAW manufacturer has stepped up to push the digital pen into more than just emulating mouse tasks. Bitwig Studio has some special MPE (MIDI Polyphonic Expression) functionality that allows you to map the pen pressure to parameters on MPE-compatible virtual instruments. More on that in another article, but hopefully more creative uses will emerge as this gains popularity.
The digital pen offers many creative opportunities. It untethers you from the mouse/keyboard paradigm and pushes you into a more natural and fluid way of working. It lacks support in some software, and there's work to be done on optimising the workflow by combining it with a toolbar, but it offers a different and creative approach to musical computer interaction.
Here’s a video of me reviewing the Microsoft Surface Book for music production which has a lot of pen use and examples in it. There’s plenty more on the YouTube channel:
This is an article that was originally posted on the Cakewalk blog and they kindly gave us permission to excerpt it here on MIDI.org.
Greetings! My name is Mike Green, Music Product Specialist at Zivix; we make the jamstik+ portable SmartGuitar & PUC+ wireless MIDI link. I'm primarily a guitar player, and in my 15+ years of musical composition, MIDI has enabled me to write and record quickly. In full disclosure: I'm a lousy keyboardist. The jamstik+ and Bluetooth MIDI's availability for Windows 10 has revolutionized what used to be a point-and-click endeavor. Now I can use virtual instruments in Cakewalk's SONAR software controlled by the jamstik+ digital guitar, entering data wirelessly via Bluetooth MIDI and using the guitar skills that come most naturally to me.
by Mike Green, Music Product Specialist at Zivix
Make Sure Your PC is Bluetooth 4.0 Compatible.
With recent updates to the Windows 10 OS, SONAR takes advantage of Bluetooth 4.0 Low Energy (BLE) to connect Bluetooth-enabled MIDI devices. Now almost all operating systems have this capability, so the performance is only going to get better from here, and more controllers will start "Roli"-ing in (haha). Check the specs on your PC (look for Bluetooth in Device Manager) to see if it is Bluetooth 4.0 compatible. If not, you can always try various BLE dongles like this one by Asus.
Connecting is easy:
Pair to Windows 10
Open SONAR
Enable your MIDI Device In/Out Check-boxes in Preferences
Select your Soft-Synth
Play!
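Once your device is paired and the ports are enabled, you can also sanity-check the connection outside the DAW. Here's a minimal verification sketch using the third-party Python library mido (with its python-rtmidi backend); this isn't from Zivix or Cakewalk, and matching the port name on "jamstik" is just a guess for the example — adjust it to whatever name your system lists.

```python
# Minimal sketch (an assumption-laden example, not from the article):
# list the MIDI inputs the OS exposes and echo notes from a BLE device.
# Requires: pip install mido python-rtmidi
import mido

names = mido.get_input_names()
print("Available MIDI inputs:", names)

# The exact port name varies by device and OS; "jamstik" is a guess.
port_name = next((n for n in names if "jamstik" in n.lower()), None)
if port_name is not None:
    with mido.open_input(port_name) as port:
        for msg in port:  # blocks, yielding messages as they arrive
            if msg.type == "note_on" and msg.velocity > 0:
                print("note", msg.note, "velocity", msg.velocity)
```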
For more on Sonar, Zivix and BLE-MIDI, check out the full article below and look for links to special deals.
Ikutaro Kakehashi was certainly one of the most influential figures in electronic music in the 20th century. He influenced music and technology throughout his lifetime. He overcame many challenges in his early life to become the head of one of the most influential electronic musical instruments companies in the world, Roland Corporation.
Kakehashi-san was born in 1930 and both of his parents passed away when he was only two years old. He grew up with relatives in Osaka, Japan. During World War II (as was typical during the war), he started working at the Hitachi shipyards in Osaka when he was only 14 years old. There he started to learn about mechanical engineering.
At the end of the war, the Japanese economy was devastated and when Kakehashi-san tried to get into Osaka University, he was rejected because of his poor health.
So he moved to the southern Japanese island of Kyushu when he was 16 and he found a job there as a geographical survey assistant. While in Kyushu he noticed that there were very few resources in early post war Japan for clock and watch repair.
A young Ikutaro Kakehashi in front of his watch shop in Kyushu circa 1946
After being refused an apprenticeship at the watch shop where he was working part time (or maybe not wanting to wait 7 years until the apprenticeship would be over!), Kakehashi bought a book on watch repair and taught himself the skills that he needed to set up his own business - the Kakehashi Watch Shop pictured above.
Soon he expanded his skills and business to repair broken radios as well as watches and clocks.
Kakehashi worked to grow his business for 4 years and his plan was to liquidate the business and go back to university as he was still only 20 years old. Just as he was planning to do this, he contracted tuberculosis in both lungs and was hospitalized.
He remained in the hospital for three years, with his condition gradually getting worse. Imagine how hard it must have been for this young man to be stuck in the hospital, knowing both of his parents had died of the same disease.
In what was actually a huge stroke of luck, Kakehashi was selected as a guinea pig to test a new drug, Streptomycin. This was an expensive experimental drug, and the three years in the hospital had drained away all of the money that Kakehashi-san had saved from his watch company. However, the new "miracle" drug soon started working, and within a year Kakehashi was able to leave the hospital and start on his life's work – changing the face of electronic music forever.
In 1955 he started experimenting with monophonic electronic musical instruments and founded Ace Electronic Industries.
Kakehashi originally attempted to build his own Theremin because he was fascinated by Dr. Bob Moog's work. But he found the Theremin was difficult to play and decided it probably was not going to be a huge commercial success.
In 1960, Ace Electronic Industries changed their name to Ace Tone.
Ace Tone had several successful products distributed by other companies.
Kakehashi started a relationship with Matsushita and designed an organ that became the National SX-601. Matsushita is one of the largest companies in Japan. They have made products under the Matsushita brand name and the National brand name, and they are known worldwide under the Panasonic brand name. They didn't adopt the Technics brand name for their line of keyboards until the late 1970s.
Kakehashi-san’s main collaborator at National was Kenji Matsumoto. They remained lifelong friends until Kenji’s death.
In 1964, Kakehashi made his first trip to the NAMM show with the Ace Electronics R1 Rhythm Ace, and although he didn't get any orders, he did make connections with some people at the Hammond Organ company and learned about the latest in electronic designs.
People seem to forget that many of the early electronic music pioneers were strongly influenced by the home organs of the late 1950s and early 60s.
Kakehashi-san with the Technics SX601
In 1971 Kakehashi helped Hammond develop the Piper Organ, which was the world's first single-manual organ to incorporate a rhythm accompaniment unit.
Eventually, with Ace's success doing almost $40 million a year in business, more investors came into the company until finally Kakehashi was only a minority shareholder in his own company. The majority of shareholders sold Ace to a huge industrial company, Sumitomo Chemical, that had no real interest in electronic musical instruments.
So never afraid to face a challenge head-on, Ikutaro Kakehashi left Ace and in 1972 started a new company with only $100,000 in capital. That company was Roland and the rest is indeed history.
The story of Kakehashi-san and MIDI is covered in our MIDI History Series, but we wanted to give you the very early history of one of the pioneers of electronic musical instruments and one of the founders of MIDI.
For more information about Kakehashi-san and Roland, check out these informative web pages:
The Roland name is almost synonymous with music technology — there can’t be an SOS reader who has not made use of their instruments at some time. As founder Ikutaro Kakehashi approaches his 75th birthday, we begin a journey through the company’s extraordinary history…
This month, we see how Roland survived some tricky times at the start of the 1980s, and how founder Ikutaro Kakehashi ensured that they were well-placed to take advantage of technological developments over the following few years.
Roland made their name with analogue synths and effects, but by the mid-1980s, they needed to go digital to remain competitive. It was a leap into the unknown for the company, but it ushered in a golden era…
Ikutaro Kakehashi, the founder of Roland Corporation, created more than a successful business with a host of important innovations in electronic musical instruments; he has also paid tribute throughout his career to those who first inspired him. Mr. Kakehashi was born in Japan and formed Ace Electronics in 1964 with the goal of improving the electronic organ, following up on the work of his heroes, Mr. Hammond and Mr. Leslie. With the expansion of electronics in the late 1960s, he formed the Roland Corporation, which soon became one of the leaders in the industry. Perhaps the only thing more impressive than Mr. …
Music is a visual language, too. Composer Andrew Huang used the piano roll editor in his MIDI sequencer to create sound from a picture of a unicorn. Each dot and line outlining the mythical creature triggers a MIDI note. To make the notes harmonize, Huang had to think both visually and musically. See his creative approach in the video.
Quantization is the process of moving MIDI data (usually notes, but also potentially other data) that’s out of time to a rhythmic “grid.” For example, if a kick drum is slightly behind the beat, quantization can move it right on the beat. Quantization was controversial enough when it was limited to MIDI, but now that you can quantize audio, it’s even more of an issue. Although some genres of music—like electro and other EDM variants—work well with quantization, excessive quantization can compromise a piece of music’s human feel.
Some people take a “holier than thou” approach to quantization by saying it’s for musical morons who lack the chops to get something right in the first place. These people, of course, never use quantization…well, at least while no one’s looking. But quantization has its place; it’s the ticket to ultra-tight grooves, and a way to let you keep a first and inspired take, instead of having to play a part over and over again to get it right—and lose the human feel by beating a part to death. Like any tool, quantization can be used or misused, so let’s concentrate on how to make quantization work for you—and avoid giving an overly rigid, non-musical quality to your work.
TRUST YOUR FEELINGS, LUKE
Computers are terrible music critics. Forcing music to fit the rhythmic criteria established by a machine is silly—it’s real people, with real emotions, who make and listen to music. To a computer, having every note hit exactly on the beat may be desirable, but that’s not the way humans work.
There’s a fine line between “making a mistake” and “bending the rhythm to your will.” Quantization removes that fine line. Yes, it gets rid of the mistakes, but it also gets rid of the nuances.
When sequencers first appeared, musicians would often compare the quantized and non-quantized versions of their playing. Invariably, after hearing the quantized version, the reaction would be a crestfallen "gee, I didn't realize my timing was that bad." But in many cases, the human was right, not the machine. I've played some solo lines where notes were off as much as 50 milliseconds from the beat, yet they sounded right. Tip #1: You dance; a computer doesn't. You are therefore much more qualified than a computer to determine what rhythm sounds right.
WHY QUANTIZATION SHOULD BE THE LAST THING YOU DO
Some people quantize a track as soon as they’ve finished playing it. Don’t! In analyzing unquantized music, you’ll often find that every instrument of every track will tend to rush or lag the beat together. In other words, suppose you either consciously or unconsciously rush the tempo by playing the snare a bit ahead of the beat. As you record subsequent overdubs, these will be referenced to the offset snare, creating a unified feeling of rushing the tempo. If you quantize the snare part immediately after playing, then you will play to the quantized part, which will change the feel.
Another possible trap occurs if you play several unquantized parts and find that some sound “off.” The expected solution would be to quantize the parts to the beat, yet the “wrong” parts may not be off compared to the absolute beat, but to a part that was purposely rushed or lagged. In the example given above of a slightly rushed snare part, you’d want to quantize your parts in relation to the snare, not a fixed beat. If you quantize to the beat the rhythm will sound even more off, because some parts will be off with respect to absolute timing, while other parts will be off with respect to the relative timing of the snare hit. At this point, most musicians mistakenly quantize everything to the beat, destroying the feel of the piece. Tip #2: Don’t quantize until lots of parts are down and the relative—not absolute—rhythm of the piece has been established.
SELECTIVE QUANTIZATION
Often only a few parts of a track will need quantization, yet for convenience musicians tend to quantize an entire track, reasoning that it will fix the parts that sound wrong and not affect the parts that sound right. However, the parts that sound right may be consistent to a relative rhythm, not an absolute one.
The best approach is to go through a piece, a few measures at a time, and quantize only those parts that are clearly in need of quantization—in other words, they sound wrong. Very often, what’s needed is not quantization per se but merely shifting an offending note’s start time. Look at the other tracks and see if notes in that particular part of the tune tend to lead or lag the beat, and shift the start time accordingly. Tip #3: If it ain’t broke, don’t fix it. Quantize only the notes that are off enough to sound wrong.
BELLS AND WHISTLES
Modern-day quantization tools, whether for MIDI or audio, offer many options that make quantization more effective. One of the most useful is quantization strength, which moves a note closer to the absolute beat by a particular percentage. For example, if a note falls 10 milliseconds ahead of the beat, quantizing to 50% strength would place it 5 milliseconds ahead of the beat. This smooths out gross timing errors while retaining some of the original part's feel (Fig. 1).
Fig. 1: The upper window (from Cakewalk SONAR) shows standard Quantization options; note that Strength is set to 80%, and there’s a bit of Swing. The lower window handles Groove Quantization, which can apply different feels by choosing a “groove” from a menu.
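To make the strength idea concrete, here's a minimal Python sketch of strength-based quantization. It's purely illustrative (no DAW implements it exactly this way, and the tick values are made up), but it shows the arithmetic: move each start time toward the nearest grid line by a percentage.

```python
# Illustrative sketch of strength-based quantization; not any DAW's code.
def quantize(start_ticks, grid_ticks, strength=1.0):
    """Move a note start toward the nearest grid line.

    strength=1.0 snaps fully to the grid; 0.5 moves the note halfway there.
    """
    nearest = round(start_ticks / grid_ticks) * grid_ticks
    return start_ticks + (nearest - start_ticks) * strength

# A note 10 ticks ahead of beat 480, quantized at 50% strength,
# lands 5 ticks ahead: tighter, but the feel survives.
print(quantize(470, 480, strength=0.5))  # -> 475.0
```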
Some programs offer “groove templates” (where you can set up a relative rhythm to which parts are quantized), or the option to quantize notes in one track to the notes in another track (which is great for locking bass and drum parts together). Tip #4: Study your recording software’s manual and learn how to use the more esoteric quantization options.
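The "quantize to another track" idea is also easy to sketch. The following illustrative Python fragment (again, an assumption-based example, not any particular program's groove-template algorithm) snaps each note start in one part to the nearest note start in a reference part, so a bass line can lock to a purposely rushed snare rather than to the absolute grid.

```python
# Illustrative sketch of quantizing one part to another part's timing.
def quantize_to_track(starts, reference_starts, strength=1.0):
    ref = sorted(reference_starts)
    return [t + (min(ref, key=lambda r: abs(r - t)) - t) * strength
            for t in starts]

# Bass hits (in ticks) pulled onto a snare part played slightly ahead:
print(quantize_to_track([0, 478, 962], [2, 475, 958]))  # -> [2, 475, 958]
```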
EXPERIMENTS IN QUANTIZATION STRENGTH
Here’s an experiment I like to conduct during sequencing seminars to get the point across about quantization strength.
First, record an unquantized and somewhat sloppy drum part on one track. It should be obvious that the timing is off.
Then copy it to another track, quantize it, and play just that track back; it should be obvious that the timing has been corrected. Then copy the original track again but quantize it to a certain strength—say, 50%. It will probably still sound unquantized. Now try increasing the strength percentage; at some point (typically in the 70% to 90% range), you’ll perceive it as quantized because it sounds right. Finally, play back that track along with the one quantized to 100% strength and check out the timing differences, as evidenced by lots of slapback echoes. If you now play the 100% strength track by itself, it will sound dull and artificial compared to the one quantized at a lesser strength. Tip #5: Correct rhythm is in the ear of the beholder, and a totally quantized track never seems to win out over a track quantized to a percentage of total quantization.
Yes, quantization is a useful tool. But don’t use it indiscriminately, or your music may end up sounding mechanical—which is not a good thing unless, of course, you want it to sound mechanical!
Instructables is a site which hosts DIY projects and is a platform for people to share what they make through words, photos, video and files. We have gone through the many MIDI DIY projects and picked out some of our favorite projects. To see all the MIDI projects that are available on the site, just click here.
MIDI (Musical Instrument Digital Interface) is a protocol developed in the 1980s which allows electronic instruments and other digital musical tools to communicate with each other. The advantages of MIDI include:
Compact: an entire song can be stored within a few hundred MIDI messages (compared to audio data, which is sampled thousands of times a second).
Easy to modify and manipulate notes: change pitch, duration, and other parameters without having to rerecord.
Change instruments: remember, MIDI only describes which notes to play; you can send these notes to any instrument to change the overall sound. …
MaxMSP is a visual programming language that helps you build complex, interactive programs without any prior experience writing code. MaxMSP is especially useful for building audio, MIDI, video, and graphics applications where user interaction is needed. This Instructable is part of a three-part workshop I'm running at Women's Audio Mission; it's part one of three Instructables that I'll be publishing over the course of the next week. (Part 2 – intermediate MaxMSP) (Part 3 – getting Max to talk to hardware) MaxMSP is split into several parts: Max handles discrete operations and MIDI, and is the easiest place to start getting familiar with the tool. MSP deals with signal processing and audio. And Jitter is for graphics rendering and video manipulation. This course will cover Max and MSP. Here are some examples of awesome things you can do with Max.
This Instructable is a continuation of Intro to MaxMSP, a three-part workshop I'm teaching at Women's Audio Mission here in San Francisco. This Instructable builds upon the topics discussed in Intro to MaxMSP and introduces some ways to work with audio in Max. Part 3 of the workshop focuses on how to get Max to talk to hardware. First off, here are some examples of the types of things you can do with audio in Max: formant synthesis (using filters to recreate human vocal sounds), audio to MIDI, and granular synthesis (cutting up a sample into tiny grains and piecing the grains together to make new sounds).
Draw your own musical keyboard with pencil on paper, using Arduino and capacitance sensing. Here is a demo and explanation of a finished project: More on this project (and paper circuits in general) can be seen here at the Science of Music blog.
Project goal: construct a laser-triggered MIDI controller using standard electric components and a recycled MIDI keyboard.
Step 1. Find a recycled MIDI keyboard / controller.
Step 2. Construct a laser-triggered switch.
Step 3. Connect the MIDI device, measure components (shorts), and test the device.
You can now play instruments, beats, loops and samples by interrupting the laser. Have fun!
A MIDI controller is any piece of equipment that generates and transmits MIDI data to MIDI-enabled devices. In short, if you have buttons on your MIDI controller, you can program those buttons to any sound you want through music software (e.g. Ableton, GarageBand, etc.). You can also program potentiometers to control effects, volumes, and more. This Instructable will show you how to create your own MIDI controller using Arduino. With a MIDI controller, you are rarely limited in what you can do. There are endless possibilities and endless fun.
Greetings Earth! This Instructable will show you how to build your very own Melodyian – an Arduino-based, 3D-printable robot that can move around, light up, and make music! It’s also a MIDI robot, and can be wirelessly controlled via MIDI over Bluetooth.This robot is part of a larger transmedia production called The Musical Melodyians. The Melodyians are musical aliens who eat music and travel through space to save the universe’s musics. Visit our webular portal to watch videos featuring these Melodyian robots, listen to Melodyian music, read our graphic novel, and more! NOTE: This project is suited for makers with at least an intermediate amount of experience with Arduinos, soldering, general electronics, and at least a basic familiarity with MIDI.
One huge issue in the world of digital music production is keeping that analog warmth (that resonated from reel-to-reel systems and tubes) in modern day digital music. Many swear that analog systems have a sound that can never be replicated by bits, and hope is lost for digital music to match that analog quality. Virtual Studio Technologies (VSTs) have tried to replicate the authentic analog sound, but they (being entirely digital) cannot give you the true sound. In this instructable, I’ll share with you how we can bridge the gap between digital and analog music production by creating a Flame Controlled MIDI Controller using an Arduino micro-controller.Fire is awesome. Flames sway, crackle, and waver which makes them a perfect medium to capture a room’s atmosphere, and ultimately to create a great analog signal. These characteristics are optimal because even when the signal is converted into digital MIDI signals, it will …
This project is a portable, Arduino-powered, grid-based MIDI controller that boots up into a variety of apps to do lots of things with sound. It has 16 backlit buttons, used as both inputs and outputs to give the controller some visual feedback. 2 potentiometers give analog control; depending on the app, the pots are assigned to tempo, MIDI velocity, pitch, and scrolling (making the available grid space larger than 4×4). An x/y accelerometer and an x/y gyroscope add some playful, gestural control to the device; most of the apps implement a "shake to erase" control and several respond to tilt in various ways. It boots up into 7 different apps (described below), though it has the potential to boot up into 16 total. This device is primarily a MIDI controller, but I've also written an app that allows you to pull the button and analog data into MaxMSP and to …
The laser harp is an electronic instrument that is played by blocking laser beams. Several laser beams are produced, and a note is played when one of the beams is blocked by the player, similar to plucking a string on a real harp. The device must therefore produce a laser beam for each note and also have a sensor for determining when a beam is blocked. I constructed a MIDI laser harp controlled with an Arduino for Spectra, an optics group at Washington University in Saint Louis. This Instructable goes over the commercial parts used, the design of the electronics, the 3D-printed mounting parts, and the frame. This project is also listed on my website with other projects.
MIDI bass pedals, similar to the pedals organists use to play bass notes, but instead used to play a MIDI synthesizer or sound module, have been popular for the last few decades. In addition to keyboard players, many electric bass players, such as Geddy Lee of Rush, have used them to expand the palette of bass sounds they use. But they can be quite expensive. These were my main costs for building a set of bass pedals:
$35 Bass pedals from a Conn organ bought on eBay
$35 Shipping for the bass pedals
$44 Arduino Mega 2560 R3 controller board
$20 Sparkfun MIDI Shield
$7 9V 1000 mA AC adapter for Arduino boards
$141 TOTAL
In addition to these I used some miscellaneous stuff like wire, solder, contact cleaner, tie wraps and cables I already had. A good place to get the Arduino components and the MIDI Shield is the Robot Shop.
Create your own antique light bulb organ to add nostalgic ambiance to any MIDI instrument! 12 light bulbs correspond to the 12 notes in an octave (minus the octave note). The rectangular box unfolds to position the light bulbs vertically for display, while at the same time providing a platform for the keyboard in use. Playing a note on the keyboard, directly via MIDI or through the USB port, illuminates the light bulb for that particular key; releasing the note releases the bulb. Pedal presses are also recognized and keep the bulb lit. The bulbs can be controlled without a computer by using the front-mounted MIDI port, or via computer, which allows for remote control via MIDI or OSC messages. More about that later… The light organ was built for and is currently in use by the band Future Dancing; see the video below to see it in action!
>>>This isn't quite finished yet as I cocked a bit of the circuit up. I'll update the instructable and upload a video when it's sorted<<< I've been DJing for about 10 years now, and for the last couple I've swapped good old-fashioned vinyl for virtual vinyl in the form of Serato. This allows me to control MP3s using timecode vinyl on the turntables. However, like a lot of DJs, this led me down the dark path of spending gigs staring at my laptop - aka - Serato Face. I needed to find an interface that would keep my eyes off the screen, but all the ones in my price range weren't laid out in a way that worked for me. Having seen some great Instructables from other people that made their own arcade-style MIDI controllers, a bespoke controller became something I needed to add to my DJ arsenal. However, …
In this Instructable, I will walk you through the process of converting a rescued noise-making children’s toy into an actually useful musical instrument using MIDI! Take a moment to just glance over the titles of the steps in this Instructable and familiarize yourself with the general process, so you know what to expect when you’re complete, and whether or not this Instructable is what you’re looking for. I’ll help you pick out a good toy to rescue, and then guide you through the process I used to successfully hack all of the buttons and switches to make something really cool and useful. We’ll rip out the old, useless guts of the toy and replace it with a cheap microcontroller that is capable of sending and receiving MIDI messages to a PC, which will do the actual sound synthesis for us. I’ll discuss the ins and outs of how to do …
I believe holographic musical instruments will be commonplace in the future, showing up everywhere from schools (for education), to homes (for fun), to media offices (for creativity), and in music studios (for production). The reason is simple: the holographic musical instrument takes a complex process and radically simplifies it: see another demo video here. I'm using the term "holographic music" to mean multidimensional musical structures mapped to 3D surfaces to be decoded through rotational motion. Just as optical holograms modulate light based on the 3D viewing angle, we can modulate sounds based on the relative 3D orientation of an object. This is Part 1 of a 3-part series on a technology that I am calling the Dub Cadet: a holographic musical instrument. Part 1 will discuss theory and technical strategy, Part 2 will provide an Arduino-based hardware solution, and Part 3 will explain the programming code that makes it work. …
Fix those little “gotchas” before they make it into the final mix
by Craig Anderton
MIDI sequencing is wonderful, but it’s not perfect—and sometimes, you’ll be sandbagged by problems like false triggers (e.g., what happens when you brush against a key accidentally), having two different notes land on the same beat when quantized, voice-stealing that cuts off notes abruptly, and the like. These glitches may not be obvious when other instruments are playing, but they nonetheless can muddy up a piece or even mess up the rhythm. Just as you’d “proof” your writing, it’s a good idea to “proof” sequenced tracks.
Begin by listening to each track in isolation; this reveals flaws more readily than listening to several tracks simultaneously. Headphones can also help, as they may reveal details you’d miss over speakers. As you listen, also check for voice-stealing problems caused by multi-timbral soft synths running out of voices. Sometimes if notes are cut off, merely changing note durations to prevent overlap—or deleting one note from a chord—will solve the problem. But you may also need to dig deeper into some other issues, such as . . .
NOTES WITH ABNORMALLY LOW VELOCITIES OR DURATIONS
Even if you can’t hear these notes, they still use up voices. They’re easy to find in an event list editor, but if you’re in a hurry, do a global “remove every note with a velocity of less than X” (or for duration, “with a note length less than X ticks”) using a function like Cakewalk Sonar’s DeGlitch option (Fig. 1).
Fig. 1: Sonar’s DeGlitch function is deleting all notes with velocities under 10 and durations under 10 milliseconds.
Note that most MIDI guitar parts benefit greatly from a quick cleanup of notes with low velocities or durations.
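If your sequencer lacks a DeGlitch-style command, the same cleanup can be scripted. Below is a rough sketch using the third-party Python library mido; this is not Sonar's algorithm, just one way to drop every note quieter than a threshold (a minimum-duration test could be added the same way, and retriggers of a suppressed note aren't handled). Events are converted to absolute ticks first so that deleting a note can't corrupt the timing of what follows.

```python
# Rough DeGlitch-style sketch with the third-party 'mido' library.
import mido

def deglitch(track, min_velocity=10):
    # Work in absolute ticks so deleting events can't corrupt the timing.
    events, now = [], 0
    for msg in track:
        now += msg.time
        events.append((now, msg))

    kept, suppressed = [], set()  # suppressed holds (channel, note) pairs
    for when, msg in events:
        if msg.type == "note_on" and msg.velocity > 0:
            if msg.velocity < min_velocity:
                suppressed.add((msg.channel, msg.note))
                continue  # drop the too-quiet note-on
        elif msg.type in ("note_off", "note_on"):  # note_on vel 0 = note-off
            if (msg.channel, msg.note) in suppressed:
                suppressed.discard((msg.channel, msg.note))
                continue  # drop its matching note-off as well
        kept.append((when, msg))

    # Rebuild delta times for the surviving events.
    out, prev = mido.MidiTrack(), 0
    for when, msg in kept:
        out.append(msg.copy(time=when - prev))
        prev = when
    return out
```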
UNWANTED AFTERTOUCH (CHANNEL PRESSURE) DATA
If your master controller generates aftertouch (pressure) but a patch isn’t programmed to use it, you’ll be recording lots of data that serves no useful purpose. When driving hardware synths, this can create timing issues and there may even be negative effects with soft synths if you switch from a sound that doesn’t recognize aftertouch to one that does.
Note that there are two types of aftertouch—channel aftertouch, which generates one message that correlates to all notes being pressed, and polyphonic aftertouch, which generates individual messages for each note being pressed. The latter sends a lot of data down the MIDI stream, but as there are few keyboard controllers with polyphonic aftertouch, it’s unlikely you’ll encounter this problem.
Steinberg Cubase’s Logical Editor (Fig. 2) is designed for removing specific types of data, and one useful application is removing unneeded aftertouch data.
Fig. 2: In this basic application of Cubase’s Logical Editor, all aftertouch data is being removed.
Note that many recording programs disable aftertouch recording as the default, but if you enable it at some point, it may stay enabled until you disable it again.
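Scripting this is also straightforward. Here's a minimal sketch with the Python library mido (not Cubase's Logical Editor); in mido, channel pressure arrives as 'aftertouch' messages and polyphonic pressure as 'polytouch', and the file name below is hypothetical. Note how a deleted event's delta time is carried over to the next event so the track's timing is preserved.

```python
# Minimal sketch: strip aftertouch from every track of a MIDI file.
import mido

def strip_aftertouch(track):
    out, carry = mido.MidiTrack(), 0
    for msg in track:
        if msg.type in ("aftertouch", "polytouch"):
            carry += msg.time  # keep the deleted event's delta time
        else:
            out.append(msg.copy(time=msg.time + carry))
            carry = 0
    return out

mid = mido.MidiFile("song.mid")  # hypothetical file name
mid.tracks[:] = [strip_aftertouch(t) for t in mid.tracks]
mid.save("song_no_at.mid")
```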
OVERLY WIDE DYNAMIC VARIATIONS
This can be a particular problem with drum parts played from a keyboard—for example, some all-important kick drum hits may be much lower than others. There are two fixes: Edit individual notes (accurate, but time-consuming), or use a MIDI edit command that sets a minimum or maximum velocity level, like the one from Sony Acid Pro (Fig. 3). With pop music drum parts, I often limit the minimum velocity to around 60 or 70.
Fig. 3: Sony’s Acid Pro makes it easy to restrict MIDI dynamics to a particular range of velocity values.
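The same velocity limiting takes only a few lines of script if your DAW lacks the command. Here's a minimal mido-based sketch (illustrative, not Acid Pro's code) that clamps every note-on velocity into a chosen range:

```python
# Illustrative sketch: restrict note-on velocities to a range.
import mido

def clamp_velocities(track, lo=60, hi=127):
    out = mido.MidiTrack()
    for msg in track:
        if msg.type == "note_on" and msg.velocity > 0:
            msg = msg.copy(velocity=max(lo, min(hi, msg.velocity)))
        out.append(msg)
    return out
```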
DOUBLED NOTES
If you “bounce” a key (or drum pad, for that matter) when playing a note, two triggers for the same note can end up close to each other. This is also very common with MIDI guitar. Quantization forces these notes to hit on the same beat, using up an extra voice and producing a flanged/delayed sound. Listening to a track in isolation usually reveals these flanged notes; erase one (if two notes hit on the same beat, I generally erase the one with the lower velocity value). Some programs offer an edit function that deletes duplicates automatically, such as Avid Pro Tools’ Delete Duplicate Notes function (Fig. 4).
Fig. 4: Pro Tools has a menu item dedicated specifically to eliminating duplicate MIDI notes.
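The rule of thumb from above (when two notes collide, keep the louder one) is simple enough to sketch. This illustrative Python fragment works on a plain list of note dictionaries rather than a real MIDI file; the field names are assumptions for the example.

```python
# Illustrative duplicate-note removal (not Pro Tools' implementation).
def delete_duplicates(notes):
    """When two notes share a pitch and start time, keep the louder one."""
    best = {}
    for n in notes:
        key = (n["start"], n["pitch"])
        if key not in best or n["velocity"] > best[key]["velocity"]:
            best[key] = n
    return sorted(best.values(), key=lambda n: n["start"])

notes = [{"start": 480, "pitch": 60, "velocity": 96},
         {"start": 480, "pitch": 60, "velocity": 41}]  # a "bounced" key
print(delete_duplicates(notes))  # only the velocity-96 note survives
```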
NOTES OVERLAP WITH SINGLE-NOTE LINES
This applies mostly to bass and wind instruments. In theory, with single-note lines you want one note to end before another begins. Even slight overlaps make the part sound more mushy (bass in particular loses "crispness"), but what's worse, two voices will briefly play where only one is needed, causing voice-stealing problems. Some programs let you fix overlaps as a Note Duration editing option.
However note that with legato mode, you do want notes to overlap. With this mode, a note transitions smoothly into the next note, without re-triggering an envelope when the next note occurs. Thus in a series of legato notes, the envelope attack occurs only for the first note of the series. If the notes overlap without legato mode selected, then you’ll hear separate articulations for each note. With an instrument like bass, legato mode can simulate sliding from one fret to another to change pitch without re-picking the note.
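Here's an illustrative Python sketch of that overlap fix, again on a plain list of note dictionaries (field names assumed): each note in a single-note line is shortened so it ends just before the next note starts. As noted above, don't run something like this on parts that rely on legato mode.

```python
# Illustrative overlap fix for single-note lines; skip legato parts.
def fix_overlaps(notes, gap=1):
    """Shorten any note that runs past the start of the next one."""
    notes = sorted(notes, key=lambda n: n["start"])
    for cur, nxt in zip(notes, notes[1:]):
        if cur["start"] + cur["duration"] > nxt["start"]:
            cur["duration"] = max(0, nxt["start"] - cur["start"] - gap)
    return notes

line = [{"start": 0, "duration": 500}, {"start": 480, "duration": 480}]
print(fix_overlaps(line))  # first note's duration is trimmed to 479 ticks
```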
Craig Anderton is an Executive Vice-President at Gibson Brands, and Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages. This article is reprinted with the express written permission of HarmonyCentral.
Companies and products listed here do not imply any recommendation or endorsement by the MIDI Manufacturers Association.
MIDI Processing, Programming, and Do It Yourself (DIY) Components
These are just examples of such products — we make no warranty re: suitability (or anything else, for that matter) — use at your own risk. If you are a manufacturer and would like to be listed here, please use our Contact Form to let us know.
MIDI Processing Devices and DIY Hardware
CHD ElektroServis: Format converters, vintage retrofits, processors.
Nokia Audio Suite 2.0 enables authoring of SP-MIDI, Mobile DLS, and Mobile XMF content, as well as modeling the sound as played by Nokia terminals.
Crimson’s DLS Tools is a professional editor for DLS Level-1, DLS Level-2, and Mobile DLS/XMF. It is useful for 3GPP (Mobile DLS) content authoring, MIDI sound module IC development, etc.
Eye and I Productions (Voice Crystal®) specializes in General MIDI & custom wavetable design and offers a wide range of GM wavetable sizes, from 32KB through 128KB for Mobile DLS1 & DLS2 applications up to 32MB for professional products. Also provides technical advice on synth functionality, testing & verification.
PolyPhontics is a full-featured DLS and SoundFont® compatible authoring tool for Mac OSX.
Animusic produces innovative music animation by leveraging MIDI data in creating "virtual concerts". The animation of graphical instrument elements is generated using proprietary software called MIDImotion™. The technique is analytical, note-based, and involves a pre-process (as opposed to being reactive, sound-based, and real-time).
Feature: Interview with Wayne Lytle, from Animusic
Animusic, as the name implies, is visualized or animated music; the brainchild of one Wayne Lytle, Columbia music major turned animator. The Animusic DVDs feature computer animations of preposterous-looking instruments – part fairground organ, part Disney's Fantasia – that "play" various pieces of music from techno to classical. The brains behind the technology is MIDI. MIDI allows the music and the animations to work, control, and be controlled in sync. Initially Lytle used off-the-shelf animation and music applications, but in recent years the business has grown sufficiently for the company to invest in their own purpose-designed software, Animotion.
How did this get started?
WL: I got into synths before MIDI, back in the seventies, with things like the MiniMoog and all that stuff. By the early 1980s I had been vaguely aware of MIDI and I had already started drawing sketches in notebooks about how I wanted to drive animation from music. Then a friend told me about this thing called MIDI, and how you can have synths and computers talking to each other, and suddenly my two worlds merged together. It's been a happy place ever since.
It wasn't really until 1989 that I first started experimenting with animation being driven from music, and analyzing the MIDI files that I would read in from whichever sequencer I was using.
Are you a keyboard player by nature?
WL: I started with piano and drums. I actually studied classical piano in college but I wasn't really very good. I was more interested in playing with my bands. I'm not particularly accurate, so I'm very grateful for MIDI and sequencing. The keyboard doesn't get in my way. I "think" the notes and then figure out a way to get them in there. The keyboard may or may not be involved. Thank goodness I don't have to play correctly; I just have to think it.
How many people work on projects?
WL: We have three people on our core production staff, with a couple of other freelancers who do background elements like skies. We have one full-time software developer, and then Dave and I are the core that cook up the ideas, model them, and write the music. On the one hand it's hard to be small, but it's also hard to be big. We don't have any major communication issues, but at the same time, with so few people it's easy to get burned out.
How long does a project take?
WL: Each of the first two DVDs took three years. We’re working towards being able to do one a year, which is why we developed the software, to make a more streamlined pipeline. We want to get to a point where we’re spending more time playing with the instrument than writing and testing the code.
Are you an animator who plays or a musician who animates?
WL: That’s the tough question I’ve asked myself! In fact worse: Am I a musician who programs or a programmer who’s trying to be a musician? I guess for a long time I wondered how was I going to get good at any of these things. I felt that I needed to just pick one and focus on that. But I didn’t and at this time they do seem to have merged nicely together.
It’s certainly a lot of fun and a wonderful thing to be able to do what’s your passion and for it be able to support a company. We feel very fortunate.
Is there any underlying motive or message behind what you’re doing?
WL: That's a great question I don't get asked much. You've really touched on something. When we started out it was just a personal passion; what I wanted to do and what I thought about all day long. It certainly started with my own personal interest. But from there, when the first DVD was released and we could see people's reaction to it, our motives and purpose began to broaden quite a bit. It did really seem to bring joy to people, to make them smile, make them happy, mesmerize them even. It does seem to affect some people very deeply – kids, and older people, right across the spectrum. That changed our motivation somewhat. There's even a special education angle: we have Special Needs teachers using our work for education, helping with everything from musical timing, to math, to social interaction.
None of that was expected. It was kind of a happy surprise. Now our motivation is to go and kindle those things. We don't really have an agenda that we're out to educate per se, but we do want to contribute positively to the electronic media and entertainment world that, at the moment, we see filled with a lot of poor quality, or violent, or plain disgusting content. We're trying to show a positive side by producing something that's different and cool, without being silly or corny.
Which comes first – the visuals or the music?
WL: Yes and yes. We've done it both ways. With certain animations I have the entire music written, sequenced, and mixed before we've even thought about animating anything. Other times we have designed and built and tested the graphical instruments before we write any music for them.
In the most ideal sense – and certainly the approach we're taking with Animusic 3 – it's really something we try to do in tandem, where we're working on building the instrument at the same time as we're learning how it plays best. What is it capable of? Will it play better fast, or is it better at slow, plodding riffs and basslines? Are there too many notes, and do we need to take some out and have it be this 8-note bass machine, or can it handle having 40 different notes? Then, as that evolves, the musical palette evolves, and perhaps even stuff we're doing in the music influences the design of the instrument. Ideally it is a much more integrated process rather than one coming completely before the other.
Have you worked specifically with any of the DAW manufacturers?
WL: Not yet. We try to just focus on the content. Our product is the DVDs themselves rather than the tools, although it's not out of the question that at some point we'd do that. We've actually gone away from using commercial sequencers. Not that they're not great – they get greater and greater as time goes on – but a year or two ago I finally got to the point where I decided to write my own sequencer [MIDIMotion] that could integrate directly with the music animation stuff, so that animating and sequencing become a more unified process; where they're almost one and the same – where you could say you were sequencing the animation or animating the music.
Is MIDIMotion available for sale yet?
WL: It's been discussed but we haven't done that yet. It's quite an undertaking to release a piece of software and support it well, so up to this point it's just been an in-house set of software tools.
We have this large animation sequencing program called Animusic Studio and that uses the MIDIMotion engine to do the music animation part of it.
What do you see as the strengths of plug-ins and software sounds as opposed to hardware synths?
WL: I for one am very happy about where we are now. I don't have a lot of complaints. Nowadays you can have synths that sound fantastic, that offer total recall, and that can be used in as many instances as you like. In the old days you couldn't just run out and buy six MiniMoogs. Now I can have, say, multiple Reason sessions open and just switch between whichever I'm working on at the moment. And there everything is, configured. Honestly, I don't find myself reminiscing about the old days too often. I'm happy about the new days.
I do however have fond memories of ELP at Madison Square Garden, watching them with mountains of gear and Emerson sticking knives into his Hammonds and stuff. That was fun too.
How does MIDI fit into the modern idiom of music making?
WL: The fact that it's become as transparent as it has is in itself an indication of its incredible success. Yes it's powerful, but you don't necessarily have to understand it. It's not like the old modular synth days where, if you couldn't figure out what to hook up to what, you were pretty much sunk. Now a lot of it is going on in the background, invisibly. And even in computer setups a lot of it is MIDI that may never make it into a physical cable at all, but it's still using the MIDI protocol, and a lot of people are completely unaware that that's what's there.
I don't really see much of a problem with that, though it would be nice if there was a little more evolution, because clearly there are some things that are missing. I don't see a whole lot of activity in pushing stuff forward. One thing I personally wish for sometimes is the ability to glide from one note to another in some controllable way, sort of like a ribbon controller where you're not snapping back to the old note. Using a pitch bend you can bend up a fifth, say, but then you have to let it snap back and pick up from where you were. That actually presents some problems graphically when bending, then having to go back and then going to a new note. I have to tell it to ignore certain data and pretend it was someone gliding up to another note, like on a fretless bass. I have to fight it.
Are the basics of MIDI still useful for people to learn?
WL: Probably, but how far down do you want to get knowledgeable about your tools? You can get really good at using your tools without necessarily knowing too much "about" them. A painter may not necessarily know all about the wood his paintbrush is made from or where the bristles come from. That won't prevent him from being able to do a great job at painting. But if you want, you can keep digging down another layer.
Obviously for the people who build music technology tools it’s critical that they understand MIDI, and not consider it to be frozen, never to be enhanced or pushed forward. And it never hurts anyone to understand what’s under the hood.
How much are you prepared to reveal about the processes you use?
WL: Right now we don't really want to give away the recipe to our secret sauce, but at a certain point I think it'll be important to share what goes on behind the scenes. It's important for us to figure out how to do that so it's clear, and doesn't come across as "wow, that's really complicated. Those guys are really smart." That's a reaction we can get sometimes, and that's not really the point. It's cool what can be done, but it's not as complex as it looks if you explain it right.
We'd like to share this in such a way that people can be empowered by MIDI. Maybe we'll do that at the same time as we make the software available for people to use. Right now, though, our focus is still very much on the next DVD.
What do you think can be done by the ‘average’ person?
WL: As far as out-of-the-box goes, not a lot, probably. It's like when synthesizers were operated by guys in lab coats at universities, plugging cords in… Nowadays everything can be done on a laptop. I think that's how it should develop; it should become that way with music and animation, where people are just dragging and dropping, making their own instruments and pumping music into them. They don't necessarily need to know the technology behind it. That's a target: to put it in people's hands as a simple and enjoyable process, as opposed to crashing and dorking around every few minutes in order to make cool things happen.