Lemonaide is a state-of-the-art AI company that creates MIDI and audio plugins to generate inspirational ideas, such as melodies and chord progressions in any key. They are partnered with BeatStars, an online marketplace for electronic music producers and beat makers, where they sell access to their generative AI MIDI models.
In 2024, they partnered with a second company, Sauceware, to release a new plugin called Spawn with audio visualization and a substantially larger collection of sounds. The original Lemonaide app has a small monthly subscription fee, whereas Spawn comes with a one-time purchase fee.
Whether you’re stuck in a creative rut or looking to experiment with new styles, Lemonaide makes sure you never run out of ideas. They’ve achieved a high-quality, human sound with rolling chord articulation and catchy, singable melodies. The model generates 4- and 8-bar phrases in a single key, appealing to sample-based producers in search of a quick starting point.
Lemonaide’s Fairly Trained AI MIDI models
Lemonaide began with a home-brewed base model called Seeds, with four different moods to choose from. In 2024 they released a handful of fine-tuned AI MIDI models, called the Collab Club, in partnership with Grammy-winning producers and chart-topping artists:
Kato On The Track: Billboard-charting producer with credits for Joyner Lucas, E-40, and Ice Cube.
KXVI: Grammy-nominated R&B/Soul producer with credits for SZA, Future, and DJ Khaled.
DJ Pain 1: Multi-platinum Neo Soul/Hip Hop producer for Nipsey Hussle, 50 Cent, and Ludacris.
Mantra: Pop hitmaker for Rihanna, Dua Lipa, and Bad Bunny.
Each model is designed to reflect the nuances of its genre, giving you access to styles crafted by industry pros. The Collab Club models are royalty-free for selling beats and placements with fewer than 1,000,000 streams. For major placements, Lemonaide provides an easy clearing process to ensure your projects remain hassle-free.
Lemonaide is certified by Fairly Trained, a non-profit initiative certifying companies that use ethically sourced training material in their AI model datasets. This certification aims to protect artists from unauthorized use of their work, addressing concerns about AI-generated content’s origins and its impact on human creativity.
This model incentivizes content creators by allowing them to generate income from their creative work while maintaining clear boundaries for when licensing terms come into play. It’s a form of ensuring creators are compensated if the AI-generated content is commercially successful. To learn more about this topic, check out the MIDI.ORG article on ethical AI MIDI software.
Built-in virtual instruments and the DAW bridge
Lemonaide’s original product includes a handful of built-in virtual instruments including space pads, electric keys, pain piano, billboard piano, and synth strings. You can audition MIDI seeds with any of those instruments before dragging them into your DAW. They also provide a DAW bridge to enable playback with virtual instruments from your personal collection.
Their latest product, Spawn, includes hundreds of curated instrument presets designed to work together seamlessly. Here’s a quick summary of what they offer:
Bass: Deep sub-bass, mid-bass, and plucked basslines for rhythmic foundation.
Keys & Piano: Versatile piano, electric keys, and organ sounds for harmonic richness.
Synth: Synth keys, leads, and pads for modern, dynamic soundscapes.
Strings & Mallet: Lush string layers, percussive mallet sounds, and steel drums for unique textures.
Brass & Woodwinds: Bold brass, airy flutes, and shimmering bells for melodic accents.
Guitar & Pluck: Acoustic and electric guitar tones, along with sharp plucks for rhythmic melodies.
Soundscapes: Atmospheric and ambient layers to create depth and atmosphere in your tracks.
Spawn’s prompt interface includes a variety of sonic qualities and effect presets as well. Choose from descriptive properties like aggressive, airy, ambient, analog, bright, clean, complex, deep, dirty, distorted, dry, evolving, ethnic, filtered, harsh, huge, lush, processed, punch, simple, spacey, sub, underwater, vinyl, and wobble.
Those prompts guide the MIDI generation, but your control over the music doesn’t end there. Spawn includes additional effect layers like reverb, delay, chorus, distortion, and flanger. Granular control over generative music is precisely what’s been missing from other state-of-the-art text-to-music generators like Suno and Udio.
An interview with the Lemonaide Team
What inspires a group of independent musicians and software developers to go all in on an AI MIDI product like this? I wanted to understand their greatest challenges as well as their biggest wins, so we interviewed co-founders Michael Jacobs and Anirudh Mani, along with Senior Research Scientist Julian Lenz, to learn more.
Ezra: What inspired you to start an AI MIDI company?
MJ: It actually all started in my career as a rapper. I fell in love with creating music at age 11 (a lot of my musical inspiration came out of trauma I dealt with as a kid). I uploaded several music videos to YouTube which caught pretty solid steam back in the day.
After spending countless hours making music, I also decided to get into technology with the goal of simply helping my family escape financial poverty. I ended up going to college for technology, and spent 5 years at Google learning more about cloud computing and AI.
After learning the impact and potential AI has, I decided it would be awesome to create a Hip-Hop EP that was co-produced by AI. And from there, the inspiration continued to snowball into realizing it would be awesome to make helpful tools for musicians using the unique inspirational value AI can provide.
Ani: As MJ was playing with Magenta and other tools, and building our initial offering of “Lemonaid”, I was a Research Scientist at Amazon’s Alexa group, working on speech- and audio-related research problems during the day and experimenting with AI MIDI models for music at night as a very serious hobby, primarily to build something interesting for my own music.
When MJ and I crossed paths, it was serendipitous. Personally, I never thought I’d start a company, but I realized that co-founding “Lemonaide” was the best way for me to express my passion and skills for pushing AI research forward when applied to music, something I also went to grad school for at Carnegie Mellon.
Growing up in a household obsessed with Hindustani classical music in India, and learning piano and production at a very early stage, I see myself as an artist first and a researcher second. I believe this instilled and solidified in me the ethical principles that we now practice at Lemonaide every day – always building with the artist at the center.
Ezra: What have been some of the greatest challenges you’ve faced so far?
MJ: It always starts with the training data. Using pre-trained MIDI models only got us so far; we very quickly realized that to build truly meaningful algorithms, we would need to ethically source high-quality MIDI from real human musicians who care about their craft, so that our AI models generate things that are truly useful to the musician.
Outside of the training data, it also has to do with building custom MIDI algorithms that can learn the differences and patterns within the training data that make the music what it is. These are things like truly capturing velocity, strumming, offset timing – the list goes on. This work is detailed in a paper we published this past year.
Julian: The single biggest challenge I see is understanding exactly how people would like to interact with ML MIDI systems. The ‘old’ system is, “here’s 20 pre-made MIDI files, now go make this into a song”. Deep learning opens up so many new possibilities, and we believe that most of them in the MIDI realm haven’t been explored yet.
From a bird’s-eye view, we see from the rise of LLM chatbots that people love interactive systems that can be personalized to their exact task and creative/professional style. So, what is the MIDI version of that? This challenge is both technical and creative, and I think there is an opportunity to really redefine how people interact with MIDI in the future.
Another more practical challenge is that of data quantity. We are really proud of being Fairly Trained, which means every piece of our training data is legally cleared. But from the ML side, this of course means that we are working with datasets much smaller than a typical modern AI company.
To put it bluntly, I don’t think companies like OpenAI, Suno or Anthropic could make their type of models if they had to account for all of the data. So this puts a really fun challenge on the deep learning side, where we have to use every trick in the bag since we can’t just rely on scale.
Finally, there is an open challenge of getting models that know just how to break the ‘right’ rules, musically speaking. Most MIDI models, from the Magenta days up until more recent state-of-the-art versions, are pretty diatonic and well-behaved. Of course you can under-train them, or push the temperature, so they get really weird outputs. But musically speaking, there is that beautiful gray zone where just a few rules are broken – the place where musicians like Stravinsky, Frank Zappa and Thelonious Monk thrive. It’s a huge challenge but I think we are on the right path.
Ani: One of the earliest challenges we faced was striking the balance between a truly generalizable MIDI model and a musically interesting one, as we had limited MIDI data. We took an ensemble-of-models approach to provide a rounded experience for our users during inference, and in parallel continued to collect ethically sourced MIDI data directly from some amazing artists; we were able to overcome this hurdle pretty soon after.
At some point in the last year we also realized that there was a need to increase the overall quality of our MIDI output by capturing more expressive details, which are especially important for a genre like hip-hop, where the swing matters a lot.
This led to research, led by Julian, introducing a new MIDI tokenization technique called PerTok, which captures such granular details while reducing sequence length by up to 59% and vocabulary size by up to 95% for polyphonic, monophonic and rhythmic tasks.
Our paper (https://arxiv.org/abs/2410.02060) was also published at ISMIR this year, and this research work is integral to the quality of outputs that our users love from our products Seeds, Collab Club and Spawn.
Ezra: What’s the most rewarding part of running a MIDI company?
MJ: One of the coolest things we are so proud of is the Collab Club. Being able to partner with Grammy-nominated, Billboard-charting producers, meet with them on a weekly basis for over a 6-month period – collect their data, train algorithms with their feedback, define a monthly revenue-share business model, and then deploy that to consumers who are looking for inspirational tooling. This is by far one of my favorite videos of one of our artists using their own model and highlighting the journey.
Ani: Lemonaide is an AI company and MIDI is our first love. ‘Controllability’ in AI modeling for music is a widely discussed topic and we believe MIDI modeling is going to be a key part of that conversation moving forward.
As MJ mentioned, every day we cross paths with people that we adore and look up to as artists ourselves, and to be able to build something for other artists and help them is the most rewarding feeling ever.
Collab Club is one such example, where we built AI MIDI models with artists in their style, and now they are the ones who get the biggest share of earnings from these models. Lemonaide will continue to grow and evolve, but something that remains a constant for us is safeguarding the interests of the Artist while navigating this new uncertain world.
Community and Support
Lemonaide fosters a thriving community of producers and artists through its Discord channel and blog resources, offering tutorials, insights, and a space for collaboration. Whether you’re troubleshooting or sharing your latest creation, the Lemonaide community is there to support you.
Check out the Lemonaide and Spawn websites to learn more.
I’ve always been excited about music and technology. My piano teacher set up a simple MIDI studio in the ’90s, with a Dell computer, a MIDI keyboard, and a KORG Audio Gallery GM sound module, which at the time was many times better than the soft-synth sounds coming from Windows’ MIDI playback. We used software like TRAX and Midisoft Studio, and the MIDI demo songs were incredible and inspiring. I was amazed at how much potential there was to create music on the computer, with 64 tracks playing together at once. That’s when I first got into MIDI, learning how note on/off, velocity, and controller signals could control the expression of my music.
I would later continue my passion for music and technology, learning about DAWs and software instruments, studying Composition and Technology in Music and Related Arts at Oberlin Conservatory, and then Music for the Screen at Columbia College Chicago. As an adult, I would continue to follow those interests in music and tech, working as a software engineer, film composer, conductor, inventor, creative technologist, and artist.
In 2020, my friend Federico Tobon and I created a musical robotic sculpture, Four Muses, using a repurposed Rock Band keyboard with MIDI out to control four electromechanical musical sculptures, creating a robotic band. That’s when I started to learn MIDI at a lower level: reading the MIDI note from the keyboard with an Arduino, packaging that up into a message and sending it with an NRF24L01 transmitter, reading that note with another NRF24L01 receiver, and then triggering a corresponding solenoid or motor to strike an instrument wirelessly. One of the instruments instead used motors spinning at different frequencies to produce pitch. Using Arduino I also programmed several modes for the keyboard and LED matrix, such as a live interaction mode, a playback mode, a sequencer mode, and a teaching mode.
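To give a sense of how simple that wireless link can be, here’s a minimal sketch of the transmitter side, assuming the common TMRh20 RF24 Arduino library and hypothetical pin wiring; the actual Four Muses firmware surely differs.

```cpp
// Transmitter sketch: forward incoming MIDI notes over an NRF24L01 radio.
// A minimal illustration (hypothetical pins and pipe address), not the
// actual Four Muses code. Assumes the TMRh20 RF24 library.
#include <SPI.h>
#include <RF24.h>

RF24 radio(9, 10);               // CE, CSN pins (hypothetical wiring)
const byte address[6] = "MUSE1"; // pipe address shared with the receiver

struct NotePacket {              // a bare 3-byte MIDI channel message
  uint8_t status;                // 0x90 = note on, 0x80 = note off
  uint8_t note;
  uint8_t velocity;
};

void setup() {
  Serial.begin(31250);           // MIDI DIN baud rate from the keyboard
  radio.begin();
  radio.openWritingPipe(address);
  radio.stopListening();         // transmit only
}

void loop() {
  // Naive framing: wait for a full 3-byte message, then send it on.
  // (A real parser would also handle running status and non-note messages.)
  if (Serial.available() >= 3) {
    NotePacket p;
    p.status = Serial.read();
    p.note = Serial.read();
    p.velocity = Serial.read();
    radio.write(&p, sizeof(p));  // the receiver fires a solenoid or motor
  }
}
```

On the receiving side, radio.read() hands back the same struct, and the note number selects which solenoid or motor to fire.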
I travel a lot, and am often writing music on the road. I had a Yamaha QY70 in the 2000s which I used to love tracking songs on. But I’ve always wanted a tiny MIDI keyboard for my laptop. Even portable keyboards like the Korg nanoKEY were too big for me to use with a laptop on a plane, and took up too much space in my luggage. I also wanted something super portable that I could run warmups on with my chorus, the Trans Chorus of Los Angeles, before gigs.
I started tinkering with the Seeed Studio Xiao, a tiny, quarter-sized microcontroller that is cheap and extremely powerful, Arduino-compatible, and able to handle HID (Human Interface Device) emulation as well as MIDI over USB. I made a breadboard prototype based on my learnings from Four Muses, adding some simple Arduino logic for supporting octave functions (simply add or remove multiples of 12 to the current note), sustain (send a control change) and modulation (another control change). I open-sourced my code here:
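While the actual open-sourced code lives at the link above, here is an illustrative sketch of that logic, assuming the stock Arduino MIDIUSB library; names and structure are ours, not the real MidiCard firmware.

```cpp
// Illustrative octave/sustain/modulation logic for a USB MIDI keyboard,
// in the spirit of the description above. Assumes the MIDIUSB library.
#include <MIDIUSB.h>

int octaveShift = 0;                    // in octaves, changed by +/- buttons

void sendNoteOn(uint8_t note, uint8_t velocity) {
  // Octave buttons simply add or remove multiples of 12 from the note.
  int shifted = constrain(note + octaveShift * 12, 0, 127);
  midiEventPacket_t ev = {0x09, 0x90, (uint8_t)shifted, velocity};
  MidiUSB.sendMIDI(ev);
  MidiUSB.flush();
}

void sendSustain(bool down) {
  // Sustain is Control Change 64: values >= 64 mean pedal down.
  midiEventPacket_t ev = {0x0B, 0xB0, 64, (uint8_t)(down ? 127 : 0)};
  MidiUSB.sendMIDI(ev);
  MidiUSB.flush();
}

void sendModulation(uint8_t amount) {
  // Modulation is Control Change 1.
  midiEventPacket_t ev = {0x0B, 0xB0, 1, amount};
  MidiUSB.sendMIDI(ev);
  MidiUSB.flush();
}
```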
On a manufacturer’s components website, I sorted through hundreds of buttons and switches for days, filtering them by size and by actuation force in newtons to find the smallest, lightest tiny buttons.
I decided to make a credit-card-sized daughterboard for the Xiao that would have 18 keys on it, plus octave buttons, sustain, and modulation functions. Since the buttons weren’t touch-sensitive, I added more buttons for setting global velocity levels (P/MF/FF). I also learned how to multiplex inputs and outputs into rows and columns, giving me 25 inputs from 5 rows and 5 columns using only 10 I/O pins, with diodes to filter out ghost notes. I learned how to use EasyEDA (free PCB design software), built my first schematic and PCB design, and ordered my first PCBs.
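Here’s a rough sketch of that scanning scheme with hypothetical pin assignments; the real firmware is in the open-sourced repository mentioned above.

```cpp
// 5x5 matrix scan: 25 switches on 10 I/O pins. Each row is pulled low in
// turn while the columns (with pull-ups) are read; the per-switch diodes
// keep chords from registering phantom "ghost" presses.
const uint8_t rowPins[5] = {0, 1, 2, 3, 4};   // hypothetical pins
const uint8_t colPins[5] = {5, 6, 7, 8, 9};
bool state[5][5];                              // last known switch states

void handleKey(uint8_t key, bool down) {
  // Map key 0-24 to a note or a function button (octave, sustain, etc.)
}

void setup() {
  for (uint8_t r = 0; r < 5; r++) {
    pinMode(rowPins[r], OUTPUT);
    digitalWrite(rowPins[r], HIGH);            // all rows idle high
  }
  for (uint8_t c = 0; c < 5; c++) pinMode(colPins[c], INPUT_PULLUP);
}

void loop() {
  for (uint8_t r = 0; r < 5; r++) {
    digitalWrite(rowPins[r], LOW);             // select one row at a time
    for (uint8_t c = 0; c < 5; c++) {
      bool pressed = (digitalRead(colPins[c]) == LOW);
      if (pressed != state[r][c]) {            // report only changes
        state[r][c] = pressed;
        handleKey(r * 5 + c, pressed);
      }
    }
    digitalWrite(rowPins[r], HIGH);            // deselect the row
  }
}
```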
The first batch I got back was a failure. It turned out the diodes I’d picked had too large a forward voltage drop, killing my signal flow. ChatGPT was very useful here, helping me troubleshoot what I’d done wrong and understand datasheets better so I could pick the right type of diodes.
I ordered a second batch, and they worked! I had the manufacturer assemble the boards, then hand-soldered the Xiao microcontrollers onto them myself and programmed them in Arduino. I now had a tiny USB-C MIDI keyboard that I could take anywhere, and since it was class compliant, it would work with phones and tablets too.
I started selling them online, and there’s been a lot of enthusiasm for these little boards. I’ve continued to iterate on them, too. The next version I designed, the MidiCard Plus, has 25 keys, which I achieved by combining the 3 velocity buttons into a single button that toggles between P/MF/FF. It also has larger, sturdier buttons.
I’m working on future versions of the MidiCard as well, with multiple color options and a bare SAM D21 chip instead of the full Xiao module. This will save me from hand-soldering each board and will give them a slimmer profile. I’m also designing cases for the MidiCard, and am open to other suggestions. (Maybe wireless or MIDI 2.0 features!)
If you’re interested in purchasing a MidiCard, they can be found at:
The ethics of AI music became a heated topic at industry panels in 2024, sparking debates around the notion of “fair use”. AI music tech companies have admitted to training their models on copyright-protected music without a license or consent from the rights holders represented by the RIAA.
Over ten thousand major figures from the industry, including Thom Yorke of Radiohead, signed a shared statement near the end of the year, expressing their belief that “unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”
In September 2024, Billboard broke a story about Michael Smith, a man accused of $10M in wire fraud. He published large quantities of algorithmically generated music and used bot farms to stream that audio for profit. Billboard’s story stoked concerns that instant song generation will pollute DSPs and siphon revenue away from “real” artists and labels.
There has been little to no discussion of AI MIDI generation software or its ethical implications. Instant song generators appeal to a substantially larger market and pose a more direct threat to DSPs, while MIDI tools are generally considered too niche for a non-technical audience.
The ethical advantages of AI MIDI generation
There are several ethical advantages to generating MIDI files instead of raw audio.
First, MIDI’s small file size conserves energy during training, generation, and file storage. That means that it’s not only cheaper to operate, but may have a lower environmental impact.
Second, independent artists are partnering with AI MIDI companies to create fine-tuned models that replicate their style, and selling access to those models as a vector for passive income.
AI audio models are fine-tuned with artists as well, but the major AI song generation companies are scraping audio from the internet or licensing stock music in bulk. They don’t partner with artists on fine-tunes, which means labels and rights holders will make the most money.
In this article, I’ll review a couple of big AI music ethics stories from 2024 and celebrate a few MIDI generation companies that have been working hard to set up fair deals with artists.
RIAA: Narrowing AI music ethics to licensing and copyright
Debates over the ethics of AI music reached a boiling point in June 2024, in a historic lawsuit by the RIAA against Suno and Udio. Both companies scraped copyright-protected music from the internet and used that audio to train their own commercial AI song generators.
Suno and Udio currently grant their users an unlimited commercial license for audio created on their platforms. This means vocalists can create albums without musicians and producers, and media creatives can add music to their productions without sync licensing fees.
Labels are predictably upset by Suno and Udio’s commercial license clause, which they feel competes directly with their own sync libraries and threatens to erode their bottom line.
To be clear, it’s not that the music industry wants to put a stop to generative AI music. On the contrary, they want to train AI models on their own music and create a new revenue source.
UMG struck up a partnership with AI music generator Klay, announced October 2024. If Klay can compete with Suno and Udio, it will likely be regarded as the “ethical alternative” and set a standard for other major labels to follow.
Fairly Trained: A system of accountability for data licensing
The non-profit organization Fairly Trained and its founder Ed Newton-Rex have put a spotlight on AI audio model training and the need for better licensing standards. They offer an affordable certification for audio companies that want to signal compliance with industry expectations.
Watch the discussion below to learn more about Fairly Trained:
AI MIDI companies with Fairly Trained certifications
At least two AI MIDI companies have been certified Fairly Trained:
Lemonaide Music is a state-of-the-art AI MIDI and audio generation plugin. They partner with music producers to fine-tune models on their MIDI stems. When users purchase a model from the app store, artists receive a 40% revenue share. In early November 2024, Lemonaide announced Spawn, a new plugin built in partnership with Sauceware that brings advanced sound design and color field visualization to the MIDI generation experience.
Soundful Music is a B2C music generation service that includes MIDI stems as part of their core product. They hire musicians to create sound templates and render variations of that content from a cloud service. Soundful is a web browser application.
Both of these companies have proven that they sourced their training data responsibly.
The environmental cost of AI music generation
I spoke to several machine learning experts who agreed that MIDI training, generation and storage should consume less energy than raw audio generation, by virtue of the file size alone.
There is no public data on energy consumption at top AI audio generation companies. What we do have are reports on the data centers where those operations are held. Journalists like Karen Hao have ramped up coverage of the data centers housing our generative model operations and demonstrated the impact they’re having on vulnerable populations.
Economists have suggested that the US will benefit from domestic energy production, encouraging the construction of small modular nuclear plants and data centers.
Big tech companies do have sustainability initiatives, but they focus primarily on carbon emission reduction. The depletion of freshwater resources has received less attention from the media, appears to be less tightly regulated, and may end up being the most important issue.
🚫 In May 2024, Microsoft’s environmental sustainability report confirmed that they failed to replenish the water consumed by data center operations. Their AI services drove a 34% increase in water consumption compared with previous years.
Freshwater restoration efforts recovered only 10% of the 55,500 megaliters consumed. The remaining 50,000-megaliter loss would be enough to fill 20,000 standard Olympic-size swimming pools.
🚫 Amazon Web Services (AWS) appears to be a major offender, but their water use is mostly private. They’ve made a commitment to become “water positive” by 2030, a distant goal post considering the growing rate of consumption.
According to UNESCO, 50% of the people on our planet suffer from extreme water scarcity for at least one month every year. Do we want our generative audio products contributing to that problem, when there might be a better alternative?
How DataMind reduced the impact of their AI music app
Professor Ben Cantil, founder of DataMind Audio, is the perfect example of a founder who prioritized ethics during model training.
DataMind partners directly with artists to train fine-tuned models on their style, offers a generous 50% revenue share, and credits the artists directly on the company’s website.
Their brick-and-mortar headquarters are powered by solar energy. They previously completed a government-sponsored study that reduced their local GPU energy footprint by 40% over a two-month period, and Cantil has made a public commitment to use green GPU centers whenever they outsource model training.
His main product is a tone morphing plugin called The Combobulator. Watch a demo of the plugin below to see how it works:
Exploring AI MIDI software further
We’ve already covered some of the Fairly Trained AI MIDI generation companies. Outside that camp, you can also check out HookTheory’s state-of-the-art AI MIDI generation feature, Aria.
The AI MIDI startup Samplab has also released several free browser tools in 2024, though they specialize in audio to MIDI rather than generative music.
Delphos Music is a B2B AI MIDI modeling service that gives musicians the power to fine-tune MIDI models on their own audio stems. Their service is currently high touch and operated through a web browser, but they do have a DAW plugin in beta.
Staccato is building an AI MIDI browser app that can analyze and expand on MIDI content. I’ve also seen a private demo from the AI text-to-MIDI generation startup Muse that looked very promising.
Bookmark our AI MIDI generator article to follow along; we update the list a few times a year.
GeoShred introduces a new paradigm for musical instruments, offering fluid expressiveness through a performance surface featuring the innovative “Almost Magic” pitch rounding. This cutting-edge software combines a unique performance interface with physics-based models of effects and musical instruments, creating a powerful tool for musicians. Originally designed for iOS devices, GeoShred is now available as an AUv3 plug-in for desktop DAWs, expanding its reach and integration into professional music production workflows.
GeoShred Studio, an AUv3 plug-in, runs seamlessly on macOS devices. Paired with GeoShredConnect, musicians can establish a MIDI/MPE connection between their iOS device running GeoShred and GeoShred Studio, enabling them to incorporate GeoShred’s expressive multi-dimensional control into their desktop production setup. This connection allows users to perform and record tracks from their iOS device as MIDI/MPE, which can be further refined and edited in the production process.
iCloud integration ensures that preset edits are synchronized between the iOS and macOS versions of GeoShred. For example, a preset saved on the iOS version of GeoShred automatically syncs with GeoShred Studio, providing a seamless experience across platforms.
Equipped with a built-in guitar physical model and 22 modeled effects, GeoShred Studio offers an impressive array of sonic possibilities. For those looking to expand their musical palette, an additional 33 physically modeled instruments from around the globe are available as in-app purchases (IAPs). These instruments range from guitars and bowed strings to woodwinds, brass, and traditional Indian and Chinese instruments.
GeoShred Studio is designed to be performed expressively using GeoShred’s isomorphic keyboard.
GeoShred Studio is also compatible with MPE controllers, conventional MIDI controllers, and even breath controllers, offering a wide range of performance options. GeoShred Studio is free to download, but core functionality requires the purchase of GeoShred Studio Essentials, which includes instruments distinct from those in the iOS/iPadOS app; iOS/iPadOS purchases do not transfer.
Works with macOS Catalina or later.
GeoShred, unleash your musical potential!
We are offering a 25% discount on all iOS/iPadOS and macOS products in celebration of GeoShred 7, valid until October 10, 2024. See the pricing table at moforte.com/pricing.
Before we get into Anthony’s presentation at NAMM 2024, I wanted to give a bit of insight into why what he did had such a personal impact on me. I learned synthesis on an ARP 2600!
I started college at Wesleyan University in 1970, the same year that Alvin Lucier, the well-respected electronic music composer, started teaching there. John Cage had been at Wesleyan only a few years before.
Wesleyan was (and still is) a great, small liberal arts school.
I was studying jazz with Clifford Thornton, who was in Sun Ra’s Arkestra, and Sam Rivers, who had played with Miles.
Wesleyan has an amazing world music program, and I was also studying African drumming with Abraham Kobena Adzenyah, who was an Associate Professor while simultaneously studying for his GED high school diploma. I would occasionally jam with L. Shankar, the Indian violinist.
John McLaughlin was studying vina at Wesleyan in the fall of 1970 and used the Wesleyan cafeteria to rehearse his new band, The Mahavishnu Orchestra. For several weeks in a row, I would hang out after lunch and listen for free as Billy Cobham, Jerry Goodman, Jan Hammer, Rick Laird, and McLaughlin rehearsed. McLaughlin and L. Shankar would later team up in Shakti.
To say the music scene at Wesleyan at the time was eclectic is an incredible understatement.
Anyway, back to Alvin Lucier. I didn’t know what to expect when I showed up in early September 1970 for that first class in Electronic Music 101, but it was more surprising than anything I could have imagined. Alvin Lucier introduced himself and it sounded like this: Ma, ma, ma, ma My… na, na, na, na, name… is Alvin… La, la, Lucier and I will… ba, ba, ba Be your… Tea, tea, teacher. At that time, Lucier had a horrific stutter, and just the year before he had written his signature work “I Am Sitting in a Room”.
The text spoken by Lucier describes the process of the work, concluding with a reference to his own stuttering:
I am sitting in a room different from the one you are in now. I am recording the sound of my speaking voice and I am going to play it back into the room again and again until the resonant frequencies of the room reinforce themselves so that any semblance of my speech, with perhaps the exception of rhythm, is destroyed. What you will hear, then, are the natural resonant frequencies of the room articulated by speech. I regard this activity not so much as a demonstration of a physical fact, but more as a way to smooth out any irregularities my speech might have.
Alvin Lucier
In October 1970, I went to a performance of I Am Sitting in a Room at the Wesleyan coffee house. Musicologists often fail to mention Lucier’s stutter, but to me it was the essence of the piece. Lucier sat in the middle of the coffee house with a microphone, two tape recorders, and speakers positioned around the small room in quad. He started repeating the text of the piece over and over again, with each consonant causing him to stutter.
It was uncomfortable to listen to and watch. But the repetitive stutter was being fed back into the room and doubled by two tape recorders which were slightly out of sync. This created an amazing cascade of stuttered rhythms.
Then after about 10 minutes, Lucier hit a switch and the sound from the speakers stopped. What happened next was magical. He then said perfectly clearly and without any stutter “I am sitting in a room different from the one you are in now.” He then repeated that single phrase and with each repetition, his stutter started to come back.
Then he kicked in the speakers and the whole process started over again. He repeated that process three times over the course of about 40 minutes. You watched in real time as someone with a serious speech impediment used electronic art to fix it, but it couldn’t last; he would always fall back into the halting, uncomfortable pattern of stuttering.
It was both powerful and heartbreaking and one of the most courageous pieces of art I have ever witnessed.
At the Wesleyan Electronic Music Studio, I learned synthesis on two ARP 2600s and an ARP 2500 sequencer set up in quad. Students in electronic music classes could get the keys to the studio, and during my 4 years at Wesleyan I spent many nights creating sounds until the wee hours of the morning, then tearing them apart and starting over from scratch to make a new patch. It was there, working with the ARP 2600, that I learned the sheer joy of making sounds with synthesizers.
Anthony’s passion for teaching synthesis brought all of that joy back.
The Lifetime Achievement Awards at April NAMM 2023
At the April NAMM show we gave out MIDI Association Lifetime Achievement Awards to the founding fathers of modern synthesis and music production, including Alan Pearlman from ARP.
So when Dina Pearlman, who runs the ARP Foundation and received the award in 2023 on her father’s behalf, came to us at NAMM 2024 and asked for a favor, we couldn’t say no.
She had scheduled a performance by Anthony at the ARP Foundation booth, which was only a 5-by-10 booth against the wall at the front of Hall A. We had a much larger booth and headphones for 50 guests.
So even though we already had 23 sessions arranged, we had to say yes, and boy are we glad we did!
If you don’t know who Anthony is, he was one of the main people who brought synthesizers to Hollywood.
He was heavily involved with the Synclavier and its development, and he and his partner, Brian Banks, had notable credits on some of the first films to use synths almost exclusively, including WarGames (1983), Starman (1984), The Color Purple (1985), Stand by Me (1986), Planes, Trains and Automobiles (1987), Young Guns (1988), and Internal Affairs (1990).
Here is a 1979 poster promoting the Synners’ (that’s Anthony and his partner at the time, Brian Banks) performances of classical pieces at the LA County Museum of Natural History.
Anthony has also been the synthesist on many amazing records, including Michael Jackson’s Thriller (produced by Quincy Jones). There is a link at the bottom of the article to his YouTube page, which has a bunch of great videos, including his presentation at NAMM 2024 where he invited young people on stage and taught them how to get cool sounds out of the ARP 2600 in a matter of minutes.
His passion for synthesis brought back college memories of discovering the joys of analog modular synths for the first time guided by Alvin Lucier.
Anthony Marinelli’s Presentation of the ARP 2600 at The MIDI Association Booth, NAMM 2024
ShowMIDI is a multi-platform GUI application to effortlessly visualize MIDI activity, filling a void in the available MIDI monitoring solutions.
Instead of wading through logs of MIDI messages to correlate relevant ones and identify what is happening, ShowMIDI visualizes the current activity and hides what you don’t care about anymore. It provides you with a real-time glanceable view of all MIDI activity on your computer.
When something happens that you need to analyze in detail, you can press the spacebar to pause the data and examine a static snapshot. Once you’re done, press the spacebar again and ShowMIDI resumes with the latest activity.
This animation shows the difference between a traditional MIDI monitor on the left and ShowMIDI on the right:
Open-source and multi-platform
ShowMIDI is written in C++ and JUCE for macOS, Windows and Linux; an iOS version is in the works. You can find the source code in the GitHub repository.
Alongside the standalone application, ShowMIDI is also available as VST2, VST3, AUv2, AUv3, CLAP and LV2 plugins for DAWs and hosts that support MIDI effect plugins. This makes it possible to visualize MIDI activity for individual channels and to save these with your session.
Introduction and overview
Below is an introduction video that shows how the standalone version of ShowMIDI works. You get a glimpse of the impetus for creating this tool and how you can use it with multiple MIDI devices. Seeing the comparison between traditional MIDI monitor logs (including my ReceiveMIDI tool) and ShowMIDI’s visualization clearly illustrates how the information becomes much easier to understand and consume.
Smart and getting smarter
ShowMIDI also analyzes the MIDI data and displays compound information, like RPN and NRPN messages that are assembled from multiple CC messages. RPN 6, the MPE configuration message, is also detected and adds MPE modes to the channels that are part of an MPE zone.
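For reference, an (N)RPN is spread across several CC messages on a single channel. The MPE Configuration Message that ShowMIDI detects, for example, is RPN 6 sent on a zone’s master channel; in raw bytes it looks roughly like this (here channel 1, allocating 15 member channels):

```
B0 65 00   CC#101  RPN MSB = 0
B0 64 06   CC#100  RPN LSB = 6   (RPN 0/6 = MPE Configuration)
B0 06 0F   CC#6    Data Entry MSB = 15 member channels
```

A traditional monitor shows those three lines separately and leaves the correlation to you; ShowMIDI reassembles them into one logical message.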
This is just the beginning; additional visualizations, smart analysis and interaction modes will continue to be added. As MIDI 2.0 becomes more widely available, ShowMIDI will be able to switch its display mode to take those messages into account too.
The MIDI Association has enjoyed an ongoing partnership with Microsoft, collaborating to ensure that MIDI software and hardware play nicely with the Windows operating system. All of the major operating systems companies are represented equally in the MIDI Association, and participate in standards development, best practices, and more to help ensure the user experience is great for everyone.
As an AI music generator enthusiast, I’ve taken a keen interest in Microsoft Research (MSR) and their machine learning music branch, where experiments about music understanding and generation have been ongoing.
It’s important to note that this Microsoft Research team is based in Asia and enjoys the freedom to experiment without being bound to the product roadmaps of other divisions of Microsoft. That’s something unique to MSR, and gives them incredible flexibility to try almost anything. This means that their MIDI generation experiments are not necessarily an indication of Microsoft’s intention to compete in that space commercially.
That being said, Microsoft has integrated work from their research team in the past, adding derived features to Office, Windows, and more. So it’s not out of the question that these AI MIDI generation efforts might someday find their way into a Windows application, or they may simply remain a fun and interesting diversion for others to experiment with and learn from.
The Microsoft AI music research team, operating under the name Muzic, started publishing papers in 2020 and has shared over fourteen projects since then. You can find their GitHub repository here.
The majority of Muzic’s machine learning efforts have been based on understanding and generating MIDI music, setting them apart from text-to-music audio generation services like Google’s MusicLM, Meta’s MusicGen, and OpenAI’s Jukebox.
On May 31st, Muzic published a research paper on their first ever text-to-MIDI application, MuseCoco. Trained on a reported 947,659 Standard MIDI files (a file format that includes MIDI performance information) across six open-source datasets, MuseCoco significantly outperformed the music generation capabilities of GPT-4, according to its developers (source).
It makes sense that MuseCoco would outperform GPT-4, having trained specifically on musical attributes in a large MIDI dataset. Details of the GPT-4 prompt techniques are included in figure 4 of the MuseCoco paper, shown below. The developers requested output in ABC notation, a shorthand form of musical notation for computers.
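If you haven’t seen ABC before, here’s a minimal example of the format: a header with an index, title, meter, default note length and key, followed by the tune itself.

```
X:1
T:Example tune
M:4/4
L:1/8
K:C
CDEF GABc | c2 G2 E2 C2 |]
```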
Text to MIDI prompting with GPT-4
I have published my own experiments with GPT-4 music generation, including code snippets that produce MIDI compositions and save the MIDI files locally using Node.js with the MidiWriter library. I also shared some thoughts about AutoGPT music generation, exploring how AI agents might self-correct and expand upon the short duration of GPT-4 MIDI output.
Readers who don’t have experience with programming can still explore MIDI generation with GPT-4 through a browser DAW called WavTool. The application includes a chatbot that understands basic instructions about MIDI and can translate text commands into MIDI data within the DAW. I speak regularly with their founder Sam Watkinson, and we anticipate some big improvements in the coming months.
Unlike WavTool, there is currently no user interface for MuseCoco. As is common with research projects, users clone the repository locally and then use bash commands in the terminal to generate MIDI data. This can be done either on a dedicated Linux install, or on Windows through the Windows Subsystem for Linux (WSL). There are no publicly available videos of the service in action and no repository of MIDI output to review.
You can explore a non-technical summary of the full collection of Muzic research papers to learn more about their efforts to train machine learning models on MIDI data.
Although non-musicians often associate MIDI with .mid files, MIDI is much larger than just the Standard MIDI File format. It was originally designed as a way to communicate between two synthesizers from different manufacturers, with no computer involved. Musicians tend to use MIDI extensively for controlling and synchronizing everything from synthesizers, sequencers, lighting, and even drones. It is one of the few standards which has stood the test of time.
Today, there are different toolkits and APIs, USB, Bluetooth, and Networking transports, and the new MIDI 2.0 standard which expands upon what MIDI 1.0 has evolved to do since its introduction in 1983.
MIDI 2.0 updates for Windows in 2023
While conducting research for this article, I discovered the Windows music dev blog where it just so happens that the Chair of the Executive Board of the MIDI Association, Pete Brown, shares ongoing updates about Microsoft’s MIDI and music efforts. He is a Principal Software Engineer in Windows at Microsoft and is also the lead of the MIDI 2.0-focused Windows MIDI Services project.
I reached out to Pete directly and was able to glean the following insights.
Q: I understand Microsoft is working on MIDI updates for Windows. Can you share more information?
A: Thanks. Yes, we’re completely revamping the MIDI stack in Windows to support MIDI 2.0, but also add needed features to MIDI 1.0. It will ship with Windows, but we’ve taken a different approach this time, and it is all open source so other developers can watch the progress, submit pull requests, feature requests, and more. We’ve partnered with AMEI (the Japan equivalent of the MIDI Association) and AmeNote on the USB driver work. Our milestones and major features are all visible on our GitHub repo and the related GitHub project.
Q: What is exciting about MIDI 2.0?
A: There is a lot in MIDI 2.0, including new messages, profiles and properties, better discovery, etc., but let me zero in on one thing: MIDI 2.0 builds on the work many have done to extend MIDI for greater articulation over the past 40 years, extends it, and cleans it up, making it more easily used by applications, and with higher resolution and fidelity. Notes can have individual articulation and absolute pitch, control changes are no longer limited to 128 values (0-127), speed is no longer capped at the 1983 serial 31,250 bps, and we’re no longer working with a stream of bytes but instead with a packet format (the Universal MIDI Packet, or UMP) that translates much better to other transports like network and BLE. It does all this while also making it easy for developers to migrate their MIDI 1.0 code, because the same MIDI 1.0 messages are still supported in the new UMP format.
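For the technically curious, here’s a rough sketch of how a MIDI 2.0 note-on packs into the 64-bit UMP form, following the published field layout (message type 4 is a MIDI 2.0 channel voice message); the helper is ours, not from any particular API.

```cpp
// Packing a MIDI 2.0 note-on into a 64-bit Universal MIDI Packet (sketch).
#include <cstdint>

struct Ump64 { uint32_t word0, word1; };

Ump64 midi2NoteOn(uint8_t group, uint8_t channel, uint8_t note,
                  uint16_t velocity /* 0-65535, not 0-127 */) {
  Ump64 p;
  p.word0 = (0x4u << 28)                // message type 4: MIDI 2.0 channel voice
          | ((group & 0xFu) << 24)      // 1 of 16 groups
          | (0x9u << 20)                // note-on opcode
          | ((channel & 0xFu) << 16)    // 1 of 16 channels within the group
          | (uint32_t(note) << 8)       // note number
          | 0x00;                       // attribute type (0 = none)
  p.word1 = (uint32_t(velocity) << 16)  // 16-bit velocity
          | 0x0000;                     // attribute data
  return p;
}
```

Compare that 16-bit velocity field with the single 0-127 byte in MIDI 1.0 and the gain in articulation resolution is obvious.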
At NAMM, the MIDI Association showcased a piano with the plugin software running in Logic under macOS. Musicians who came by and tried it out (the first public demonstration of MIDI 2.0, I should add) were amazed by how much finer the articulation was, and how enjoyable it was to play.
Q: When will this be out for customers?
A: At NAMM 2023, we (Microsoft) had a very early version of the USB MIDI 2.0 driver out on the show floor in the MIDI Association booth, demonstrating connectivity to MIDI 2.0 devices. We have hardware and software developers previewing bits today, with some official developer releases coming later this summer and fall. The first version of Windows MIDI Services for musicians will be out at the end of the year. That release will focus on the basics of MIDI 2.0. We’ll follow on with updates throughout 2024.
Q: What happens to all the MIDI 1.0 devices?
A: Microsoft, Apple, Linux (ALSA Project), and Google are all working together in the MIDI association to ensure that the adoption of MIDI 2.0 is as easy as possible for application and hardware developers, and musicians on our respective operating systems. Part of that is ensuring that MIDI 1.0 devices work seamlessly in this new MIDI 2.0 world.
On Windows, for the first release, class-compliant MIDI 1.0 devices will be visible to users of the new API and seamlessly integrated into that flow. After the first release is out and we’re satisfied with performance and stability, we’ll repoint the WinMM and WinRT MIDI 1.0 APIs (the APIs most apps use today) to the new service so they have access to the MIDI 2.0 devices in a MIDI 1.0 capacity, and also benefit from the multi-client features, virtual transports, and more. They won’t get MIDI 2.0 features like the additional resolution, but they will be up-leveled a bit, without breaking compatibility. When the MIDI Association members defined the MIDI 2.0 specification, we included rules for translating MIDI 2.0 protocol messages to and from MIDI 1.0 protocol messages, to ensure this works cleanly and preserves compatibility.
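As a sketch of what those translation rules involve, here is the min-center-max upscaling idea in code, paraphrased from the published approach: 0 stays 0, the 7-bit center (64) lands exactly on the 16-bit center (0x8000), and 127 becomes 0xFFFF, with intermediate values filled by bit repetition. Treat this as illustrative; the spec text is normative.

```cpp
// Upscale an n-bit MIDI 1.0 value to a wider MIDI 2.0 field (sketch).
#include <cstdint>

uint32_t scaleUp(uint32_t srcVal, int srcBits, int dstBits) {
  int scaleBits = dstBits - srcBits;
  uint32_t shifted = srcVal << scaleBits;
  uint32_t srcCenter = 1u << (srcBits - 1);
  if (srcVal <= srcCenter) return shifted;   // lower half: plain bit shift
  int repeatBits = srcBits - 1;              // upper half: bit-repeat fill
  uint32_t repeat = srcVal & ((1u << repeatBits) - 1);
  repeat = (scaleBits > repeatBits) ? repeat << (scaleBits - repeatBits)
                                    : repeat >> (repeatBits - scaleBits);
  while (repeat != 0) {                      // fill the low bits by repetition
    shifted |= repeat;
    repeat >>= repeatBits;
  }
  return shifted;
}
// scaleUp(0,7,16) == 0x0000; scaleUp(64,7,16) == 0x8000; scaleUp(127,7,16) == 0xFFFF
```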
Over time, we’d expect new application development to use the new APIs to take advantage of all the new features in MIDI 2.0.
We are excited to announce that voting for the MIDI Innovation Awards 2023 is officially open. In the tradition of our past two successful years, we continue to celebrate innovation, creativity, and the fantastic array of talent in our MIDI community. As MIDI marks its 40th birthday this year, we’re thrilled to see how far we’ve come and anticipate the future with MIDI 2.0, which is set to inspire another revolution in music.
This year, you can discover and cast your votes for the most innovative MIDI-based projects across five categories until July 21st. The categories are:
Commercial Hardware Products
Commercial Software Products
Prototypes and non-commercial hardware products
Prototypes and non-commercial software products
Artistic/Visual Project or Installation
The MIDI Innovation Awards 2023, a joint effort by Music Hackspace, The MIDI Association, and NAMM, showcases over 70 innovative entries ranging from MIDI controllers to art installations. The three entries with the most votes will be shortlisted and presented to our stellar jury, who will select each category winner.
We’re proud to announce new partnerships for 2023 with Sound On Sound, the world’s leading music technology magazine, and Music China, who will provide exhibition space to our winners at their Autumn 2023 trade fair in Shanghai. Our winners also receive significant support from The MIDI Association and Music Hackspace for the development of MIDI 2.0 prototypes, coverage in Sound On Sound, and an opportunity to exhibit at the 2023 NAMM Show.
The MIDI Innovation Awards entries will be evaluated by a distinguished jury representing various facets of the music industry. The esteemed judges include Jean-Michel Jarre, Nina Richards, Roger Linn, Michele Darling, Bian Liunian, and Pedro Eustache. They’ll be assessing entries based on innovation, inspiring and novel qualities, interoperability, and practical / commercial viability.
Mark these key dates in your calendar:
July 21st: Voting closes, jury deliberation starts
August 16th: Finalists announced
September 16th: Live show online – winners revealed
October: Finalists are invited to participate in the Sound On Sound SynthFest UK and Music China, including the User Choice Awards competition
Vote for your favorites now, and help us champion the most innovative MIDI designs of 2023!
For more details, visit the MIDI Innovation Awards page.
Together, let’s keep the music playing and the innovations flowing!
This article explains the benefits of MIDI 2.0 to people who use MIDI.
If you are a MIDI developer looking for the technical details of MIDI 2.0, go to this article, which has been updated to reflect the major revisions published to the core MIDI 2.0 specifications in June 2023.
The following video explains the basics of MIDI 2.0 in simple language.
MIDI 2.0 Overview
Music is the universal language of human beings and MIDI is the universal digital language of music
Back in 1983, musical instrument companies that competed fiercely against one another nonetheless banded together to create a visionary specification—MIDI 1.0, the first universal Musical Instrument Digital Interface.
Nearly four decades on, it’s clear that MIDI was crafted so well that it has remained viable and relevant. Its ability to join computers, music, and the arts has become an essential part of live performance, recording, smartphones, and even stage lighting.
Now, MIDI 2.0 takes the specification even further, while retaining backward compatibility with the MIDI 1.0 gear and software already in use. MIDI 2.0 is the biggest advance in music technology in 4 decades. It offers many new features and improvements over MIDI 1.0, such as higher resolution, bidirectional communication, dynamic configuration, and enhanced expressiveness.
MIDI 2.0 Means Two-way MIDI Conversations
MIDI 1.0 messages went in one direction: from a transmitter to a receiver. MIDI 2.0 is bi-directional and changes MIDI from a monologue to a dialog. With the new MIDI-CI (Capability Inquiry) messages and UMP EndPoint Device Discovery Messages, MIDI 2.0 devices can talk to each other, and auto-configure themselves to work together.
They can also exchange information on functionality, which is key to backward compatibility—MIDI 2.0 gear can find out if a device doesn’t support MIDI 2.0, and then simply communicate using MIDI 1.0.
MIDI 2.0 Specs are mostly for MIDI developers, not MIDI users
If you are a MIDI user trying to read and make sense of many of the new MIDI 2.0 specs, MIDI 2.0 may seem really complicated.
Yes, it actually is more complicated because we have given hardware and software MIDI developers and operating system companies the ability to create bi-directional MIDI communications between devices and products.
MIDI 2.0 is much more like an API (application programming interface, a set of functions and procedures allowing the creation of applications that access the features or data of an operating system, application, or other service) than a simple one directional set of data messages like MIDI 1.0.
Just connect your MIDI gear exactly like you always have and then the operating systems, DAWs and MIDI applications take over and try to auto-configure themselves using MIDI 2.0.
If they can’t then they will work exactly like they do currently with MIDI 1.0.
If they do have mutual MIDI 2.0 features, then these auto-configuration mechanisms will work and set up your MIDI devices for you.
MIDI 2.0 works harder so you don’t have to.
Just Use MIDI
As you can see, the only step that MIDI users really have to think about is Step 7: Use MIDI.
MIDI 2.0 expands MIDI to 256 channels, arranged as 16 Groups of 16 channels each, so you will start to see applications and products that display Groups. But these are not so different from the 16 ports in USB MIDI 1.0.
We have tried very hard to make it simple for MIDI users, but as any good developer will tell you – making it easy for users often makes more work for developers.
MIDI-CI Profile Configuration
At Music China 2023, there were a number of public presentations of recent MIDI specifications that the MIDI Association has been working on.
Joe Shang from Medeli, who is on the MIDI Association Technical Standards Board, put it very well at the International MIDI Forum at Music China.
He said that with the recent updates published in June 2023, MIDI 2.0 has a strong skeleton, but now we need to put muscles on the bones. Profiles, he said, are the muscles we need to add.
He is right. This will be “The Year Of Profiles” for The MIDI Association.
We have now adopted 7 Profiles:
MIDI-CI Profile for Default Control Change Mapping
MIDI-CI Profile for General MIDI 2 (GM2 Function Block Profile)
MIDI-CI Profile for General MIDI 2 Single Channel (GM2 Melody Channel)
MIDI-CI Profile for Drawbar Organ Single Channel
MIDI-CI Profile for Rotary Speaker Single Channel
MIDI-CI Profile for MPE (Multi Channel)
MIDI-CI Profile for Orchestral Articulation Single Channel
We also have completed the basic design of three more Profiles.
MIDI-CI Profile for Orchestral Articulation Single Channel
MIDI-CI Profile for Piano Single Channel
MIDI-CI Profile for Camera Control Single Channel
At Music China, and at a meeting held at the same time at Microsoft’s office in Redmond, MIDI Association and AMEI members discussed the UDP network transport specification we are working on, and the need for Profiles for all sorts of effects (chorus, reverb, phaser, distortion, etc.), electronic drums, wind controllers, and DAW control.
The MIDI 2.0 overview defines a Profile as a set of rules for how a MIDI device sends or responds to a specific set of MIDI messages to achieve a specific purpose or suit a specific application.
Advanced MIDI users might be familiar with manually “mapping” all the controllers from one device to another device to make them talk to each other. Most MIDI users are familiar with MIDI Learn.
If two devices agree to use a common Profile, MIDI-CI Profile Configuration can auto-configure the mappings. The two devices learn what their common capabilities are and can then auto-configure themselves to respond correctly to a whole set of MIDI messages.
MIDI gear can now have Profiles that can dynamically configure a device for a particular use case. If a control surface queries a device with a “mixer” Profile, then the controls will map to faders, panpots, and other mixer parameters. But with a “drawbar organ” Profile, that same control surface can map its controls automatically to virtual drawbars and other keyboard parameters—or map to dimmers if the profile is a lighting controller. This saves setup time, improves workflow, and eliminates tedious manual programming.
Actually, General MIDI was an early example of what a Profile could do.
GM was a defined set of responses to a set of MIDI messages. But GM was done before the advent of the bi-directional communication enabled by MIDI-CI.
So in the MIDI 1.0 world, you sent out a GM On message, but you never knew whether the device on the other side could actually respond to it. There was no dialog to establish a connection and negotiate capabilities.
But bi-directional communication allows for much better negotiation of capabilities (MIDI-CI stands for Capability Inquiry, after all).
One of the important things about Profiles is that they can negotiate a set of features like the number of Channels a Profile wants to use. Some Profiles like the Piano Profile are Single Channel Profiles and get turned on and used on any single channel you want.
Let’s use the MPE Profile as an example. MPE works great, but it has no bi-directional communication for negotiation.
With MIDI 2.0, using a mechanism called the Profile Details Inquiry message, two products can agree that they want to be in MPE mode, agree on the number of channels that both devices can support and the number of dimensions of control that both devices support (Pitch Bend, Channel Pressure and a third dimension of control), and even whether both devices support high-resolution bi-polar controllers. Bi-directional negotiation just makes things work better automatically.
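Conceptually, the negotiation boils down to intersecting the two devices’ reported capabilities. A hypothetical sketch (types and names are ours, purely for illustration, not from the MIDI-CI specification or any real API):

```cpp
#include <algorithm>

// Hypothetical capability report each side returns via Profile Details Inquiry.
struct MpeDetails { int maxChannels; int dimensions; bool highResBipolar; };

// The settings both sides end up agreeing to enable with Set Profile On.
struct MpeAgreement { int channels; int dimensions; bool highResBipolar; };

MpeAgreement negotiateMpe(const MpeDetails &ours, const MpeDetails &theirs) {
  // The usable configuration is the intersection of what both sides support.
  return MpeAgreement{
      std::min(ours.maxChannels, theirs.maxChannels),
      std::min(ours.dimensions, theirs.dimensions),
      ours.highResBipolar && theirs.highResBipolar};
}
```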
Let’s consider MIDI pianos. Pianos have a lot of characteristics in common and we can control those characteristics by a common set of MIDI messages. MIDI messages used by all pianos include Note On/Off and Sustain Pedal.
But when we brought all the companies that made different kinds of piano products together (digital piano makers like Kawai, Korg and Roland, companies like Yamaha and Steinway that make MIDI controlled acoustic pianos and softsynth companies like Synthogy that makes Ivory), we realized that each company had different velocity and sustain pedal response curves.
We decided that if we all agreed on a Piano Profile with an industry standard velocity and pedal curve, it would greatly enhance interoperability.
Orchestral articulation is another great example. There are plenty of great orchestral libraries, but each company uses different MIDI messages to switch articulations. Some companies use notes at the bottom of the keyboard and some use CC messages. So we came up with a way to put the actual articulation messages right into the expanded fields of the MIDI 2.0 Note On message.
The following video has a demonstration of how Profile Configuration works.
The MIDI Association adopted the first Profile in 2022, the Default Control Change Mapping Profile.
Many MIDI devices are very flexible in configuration to allow a wide variety of interaction between devices in various applications. However, when 2 devices are configured differently, there can be a mismatch that reduces interoperability.
This Default Control Change Mapping Profile defines how devices can be set to a default state, aligned with core definitions of MIDI 1.0 and MIDI 2.0. In particular, devices with this Profile enabled have the assignment of Control Change message destinations/functions set to common, default definitions.
Because MIDI 1.0 has fewer than 128 defined controllers, even the most commonly used ones could be reassigned to other functions.
Turning on this Profile sets commonly used controllers such as Volume (CC7), Pan (CC10), Sustain (CC64), Cutoff (CC74), Attack (CC73), Decay (CC75), Release (CC72), and Reverb Depth (CC91) to their intended assignments.
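Expressed as a simple lookup table using the CC numbers listed above, the effect of enabling the Profile might be sketched like this:

```python
# The default assignments named above, as a simple lookup. Enabling the
# Profile resets a device's CC destinations to these functions.

DEFAULT_CC_MAP = {
    7:  "Volume",
    10: "Pan",
    64: "Sustain",
    72: "Release Time",
    73: "Attack Time",
    74: "Cutoff (Brightness)",
    75: "Decay Time",
    91: "Reverb Depth",
}

def reset_to_defaults(device_cc_map: dict) -> dict:
    """Overwrite any reassigned controllers with the default functions."""
    device_cc_map.update(DEFAULT_CC_MAP)
    return device_cc_map
```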
The video above included a very early prototype of the Drawbar Organ Profile and Rotary Speaker Profile.
MIDI-CI Property Exchange
Property Exchange is a set of System Exclusive messages that devices can use to discover, get, and set many properties of MIDI devices. The properties that can be exchanged include device configuration settings, a list of patches with names and other metadata, a list of controllers and their destinations, and much more.
Property Exchange can allow devices to auto-map controllers, choose programs by name, change state, and provide visual editors to DAWs without any prior knowledge of the device or specially crafted software. This means devices could work on Windows, Mac, Linux, iOS, and web browsers, and may provide tighter integration with DAWs and hardware controllers.
Property Exchange uses JSON inside of the System Exclusive messages. JSON (JavaScript Object Notation) is a human-readable format for exchanging data sets. The use of JSON expands MIDI with a whole new area of potential capabilities.
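As an illustration of the idea (not a literal excerpt from the published resource schemas), a program list reply might carry JSON along these lines:

```python
import json

# Illustrative only: a program list a Responder might return, expressed
# as JSON. The actual Property Exchange resource schemas are defined in
# the MIDI Association specifications listed below.
program_list = [
    {"bankPC": [0, 0, 0], "title": "Grand Piano", "category": ["Keys"]},
    {"bankPC": [0, 0, 1], "title": "Warm Pad",    "category": ["Pads"]},
]
payload = json.dumps(program_list)
print(payload)  # this JSON travels inside System Exclusive messages
```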
The MIDI Association has completed and published the following Property Exchange Resources.
Property_Exchange_Foundational_Resources
Property_Exchange_Mode_Resources
Property_Exchange_ProgramList_Resource
Property_Exchange_Channel_Resources
Property_Exchange_LocalOn_Resource
Property_Exchange_MaxSysex8Streams_Resource
Property_Exchange_Get_and_Set_Device_State
Property_Exchange_StateList
Property_Exchange_ExternalSync_Resource
Property_Exchange_Controller_Resources
One of the most interesting of these PE specifications is Get and Set Device State which allows for an Initiator to send or receive Device State, or in other words, to capture a snapshot which might be sent back to the Device at a later time.
The primary goal of this application of Property Exchange is to GET the current memory of a MIDI Device. This allows a Digital Audio Workstation (DAW) or other Initiator to store the State of a Responder Device between closing and opening of a project. Before a DAW closes a project, it performs the GET inquiry and the target Device sends a REPLY with all data necessary to restore the current State at a later time. When the DAW reopens a project, the target Device can be restored to its prior State by sending an Inquiry: Set Property Data Message.
Data included in each State is decided by the manufacturer but typically might include the following properties (not an exhaustive list):
Current Program
All Program Parameters
Mode: Single Patch, Multi, etc.
Current Active MIDI Channel(s)
Controller Mappings
Samples and other binary data
Effects
Output Assignments
Essentially, this will allow hardware devices to offer the same level of recall as soft synths when used with a DAW.
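A minimal sketch of that save/restore flow from the DAW side might look like this; the Device class and its get/set methods are hypothetical stand-ins for the actual Property Exchange transactions, not a real API.

```python
# Sketch of the Get and Set Device State flow described above.

class Device:
    """Stand-in for a hardware Responder reachable over MIDI-CI."""

    def __init__(self):
        self._memory = {"program": 12, "cutoff": 8700}

    def get_property(self, resource: str) -> dict:
        # Stand-in for Inquiry: Get Property Data
        return dict(self._memory)

    def set_property(self, resource: str, data: dict) -> None:
        # Stand-in for Inquiry: Set Property Data
        self._memory = dict(data)

synth, project = Device(), {}
project["state"] = synth.get_property("State")  # before closing the project
synth.set_property("State", project["state"])   # when reopening it later
```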
There are a number of MIDI Association companies who are actively working on implementing this MIDI 2.0 Property Exchange Resource.
MIDI-CI Process Inquiry
Version 1.2 of MIDI-CI introduces a new category of MIDI-CI, Process Inquiry, which allows one device to discover the current values of supported MIDI Messages in another device, including System Messages, Channel Controller Messages, and Note Data Messages.
Here are some use cases (a conceptual sketch follows this list):
Query the current values of parameters which are settable by MIDI Controller messages.
Query to find out which Program is currently active.
Query to find out the current song position of a sequence.
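Conceptually, a Process Inquiry round trip is just “ask, then report current values.” A toy sketch, with all message framing omitted and names invented for illustration:

```python
# The Initiator asks for specific items; the Responder reports whatever
# of those it currently tracks.

def process_inquiry(responder_state: dict, targets: list) -> dict:
    return {t: responder_state[t] for t in targets if t in responder_state}

responder_state = {"CC7": 100, "CC74": 64, "program": 5, "song_position": 96}
print(process_inquiry(responder_state, ["program", "song_position"]))
# -> {'program': 5, 'song_position': 96}
```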
For Those Who Want To Go Deeper
In the previous version of this article we provided some more technical details, and we will retain them here for those who want to know more. But if you are satisfied with knowing what MIDI 2.0 can do for you, you can stop reading here.
MIDI Capability Inquiry (MIDI-CI) and UMP Discovery
To protect backwards compatibility in a MIDI environment with expanded features, devices need to confirm the capabilities of other connected devices. When 2 devices are connected, they confirm each other’s capabilities before using expanded features; if both devices support the same expanded MIDI features, they can agree to use them.
The additional capabilities that MIDI 2.0 brings to devices are enabled by MIDI-CI and by new UMP Device Discovery mechanisms.
New MIDI products that support MIDI-CI and UMP Discovery can be configured by devices communicating directly with each other. Users won’t have to spend as much time configuring the way products work together.
Both MIDI-CI and UMP Discovery share certain common features:
They separate older MIDI products from newer products with new capabilities and provide a mechanism for two MIDI devices to understand which new capabilities are supported.
They assume and require bidirectional communication. Once a bi-directional connection is established between devices, query and response messages define what capabilities each device has; the devices then negotiate or auto-configure to use the features they have in common.
MIDI DATA FORMATS AND ADDRESSING
MIDI 1.0 BYTE STREAM DATA FORMAT
MIDI 1.0 originally defined a byte stream data format and a dedicated 5 pin DIN cable as the transport. When computers became part of the MIDI environment, various other transports were needed to carry the byte stream, including software connections between applications. What remained common at the heart of MIDI 1.0 was the byte stream data format.
The MIDI 1.0 Data Format defines the byte stream as a Status Byte followed by data bytes. Status Bytes have the most significant bit set high, and the value of the Status determines how many data bytes follow.
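A minimal byte-stream walk in Python shows the rule in action (Channel Voice messages only; System messages and running status are left out for brevity):

```python
# A Status Byte has its top bit set, and the status's high nibble
# determines how many data bytes follow.

DATA_BYTES = {0x8: 2, 0x9: 2, 0xA: 2, 0xB: 2, 0xC: 1, 0xD: 1, 0xE: 2}

def parse(stream: bytes):
    i = 0
    while i < len(stream):
        status = stream[i]
        assert status & 0x80, "expected a Status Byte"
        n = DATA_BYTES[status >> 4]          # data byte count from status
        yield status, stream[i + 1 : i + 1 + n]
        i += 1 + n

# 0x90 = Note On on channel 1: note 60 (middle C) at velocity 100,
# followed by 0xC0 = Program Change to program 5.
for msg in parse(bytes([0x90, 60, 100, 0xC0, 5])):
    print(msg)
```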
Addressing in MIDI 1.0 DATA FORMAT
The original MIDI 1.0 design had 16 channels. Back then synthesizers were analog synths with limited polyphony (4 to 6 Voices) that were only just starting to be controlled by microprocessors.
In MIDI 1.0 byte stream format, the value of the Status Byte of the message determines whether the message is a System Message or a Channel Voice Message. System Messages are addressed to the whole connection. Channel Voice Messages are addressed to any of 16 Channels.
Addressing in USB MIDI 1.0 DATA FORMAT
In 1999, when the USB MIDI 1.0 specification was adopted, USB added the concept of multiple MIDI ports: a single USB connection could carry 16 ports, each with its own 16 channels.
The Universal MIDI Packet (UMP) Format
The Universal MIDI Packet (UMP) Format, introduced as part of MIDI 2.0, uses a packet-based data format instead of a byte stream. Packets can be 32 bits, 64 bits, 96 bits, or 128 bits in size.
This format, based on 32 bit words, is more friendly to modern processors and systems than the byte stream format of MIDI 1.0. It is well suited to transports and processing capabilities that are faster and more powerful than those available when MIDI 1.0 was introduced in 1983.
More importantly, UMP can carry both MIDI 1.0 protocol and MIDI 2.0 protocol. It is called a Universal MIDI Packet because it handles both MIDI 1.0 and MIDI 2.0 and is planned to be used for all new transports defined by the MIDI Association including the already updated USB MIDI 2.0 specification and the Network Transport specification that we are currently working on.
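As a sketch of what “32-bit words” means in practice, here is one plausible way to pack a MIDI 1.0 Note On into a single UMP word, following the published layout of the Message Type and Group nibbles (Message Type 0x2 is MIDI 1.0 Channel Voice). Treat it as illustrative rather than reference code.

```python
# Pack a MIDI 1.0 Note On into one 32-bit UMP word:
# [MT nibble][Group nibble][status byte][note byte][velocity byte]

def ump_midi1_note_on(group: int, channel: int, note: int, velocity: int) -> int:
    mt = 0x2                                  # MIDI 1.0 Channel Voice
    status = 0x90 | (channel & 0x0F)          # Note On + channel
    return (mt << 28) | ((group & 0x0F) << 24) | (status << 16) | (note << 8) | velocity

word = ump_midi1_note_on(group=0, channel=0, note=60, velocity=100)
print(f"{word:08X}")  # -> 20903C64
```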
Addressing in UMP FORMAT
The Universal MIDI Packet introduces an optional Group field for messages. Each Message Type is defined to be addressed with a Group or without a Group field (“Groupless”).
Channels, Groups and Groupless Messages in UMP
These mechanisms expand the addressing space beyond that of MIDI 1.0.
Groupless Messages are addressed to the whole connection. Other messages are addressed to a specific Group, either as a System message for that whole Group or to a specific Channel within that Group.
UMP continues this step by step expansion of MIDI capabilities while maintaining the ability to map back to MIDI products from 1983.
UMP carries 16 Groups of MIDI Messages, each Group containing an independent set of System Messages and 16 MIDI Channels. Therefore, a single connection using the Universal MIDI Packet carries up to 16 sets of System Messages and up to 256 Channels.
Each of the 16 Groups can carry either MIDI 1.0 Protocol or MIDI 2.0 Protocol. Therefore, a single connection can carry both protocols simultaneously. MIDI 1.0 Protocol and MIDI 2.0 Protocol messages cannot be mixed together within 1 Group.
Groups are slightly different than Ports, but for compatibility with legacy 5 PIN DIN, a single 16 channel Group in UMP can be easily mapped back to a 5 PIN DIN Port or to a Port in USB MIDI.
You will soon start to see applications which offer selection for Groups and Channels.
The newest specifications in June 2023 add the concept of Groupless Messages and Function Blocks.
Groupless Messages are used to discover details about a UMP Endpoint and its Function Blocks.
Some Groupless Messages are passed to operating systems and applications which use them to provide you with details of what functions exist in the MIDI products you have.
Now a MIDI Device can declare that Groups 1, 2, 3, and 4 are all used for a single function spanning 64 Channels (for example a mixer or a sequencer).
All of these decisions had to be made very carefully to ensure that everything would map back and work seamlessly with MIDI 1.0 products from 1983.
UMP Discovery
The UMP Format defines mechanisms for Devices to discover fundamental properties of other Devices to connect, communicate and address messages. Discoverable properties include:
1. Device Identifiers: Name, Manufacturer, Model, Version, and Product Instance Id (i.e., a unique identifier).
2. Data Formats Supported: Version of UMP Format (necessary for expansion in the future), MIDI Protocols, and whether Jitter Reduction Timestamps can be used.
3. Device Topology: including which Groups are currently valid for transmitting and receiving messages and which Groups are available for MIDI-CI transactions.
These properties can be used for Devices to auto-configure through bidirectional transactions, thereby enabling the best connectivity between the Devices. These properties can also provide useful information to users for manual configuration.
UMP handles both MIDI 1.0 and MIDI 2.0 Protocols
A MIDI Protocol is the language of MIDI, or the set of messages that MIDI uses. Architectural concepts and semantics from MIDI 1.0 are the same in the MIDI 2.0 Protocol. Compatibility for translation to/from MIDI 1.0 Protocol is given high priority in the design of MIDI 2.0 Protocol.
In fact, Apple has used MIDI 2.0 as the core data format for Core MIDI, with high-resolution 16-bit velocity and 32-bit controllers, since macOS Monterey was released in 2021. So if you have an Apple computer or iOS device, you probably already have MIDI 2.0 in your operating system. When you plug in a MIDI 1.0 device, the Apple operating system detects it and translates MIDI 2.0 messages into MIDI 1.0 messages so you can just keep making music.
This seamless integration of MIDI 1.0 and MIDI 2.0 is the goal of the numerous implementations that have been released or are under development. Google has added the MIDI 2.0 protocol to Android 13, and Analog Devices has added it to their A2B network. Open-source ALSA implementations for Linux and Microsoft Windows drivers/APIs are expected to be released later this year.
One of our main goals in the MIDI Association is to bring added possibilities to MIDI without breaking anything that already works, while making sure that MIDI 1.0 devices work smoothly in a MIDI 2.0 environment.
The MIDI 1.0 Protocol and the MIDI 2.0 Protocol have many messages in common and messages that are identical in both protocols.
The MIDI 2.0 Protocol extends some MIDI 1.0 messages with higher resolution and new features. There are newly defined messages. Some can be used in both protocols and some are exclusive to the MIDI 2.0 Protocol.
New UMP messages allow one device to query what MIDI protocols another device supports and they can mutually agree to use a new protocol.
In some cases (the Apple example above is a good one), an operating system or an API might have additional means for discovering or selecting Protocols and JR Timestamps to fit the needs of a particular MIDI system.
MIDI 2.0 Protocol: Higher Resolution, More Controllers and Better Timing
The MIDI 2.0 Protocol uses the architecture of MIDI 1.0 Protocol to maintain backward compatibility and easy translation while offering expanded features.
Extends the data resolution for all Channel Voice Messages.
Makes some messages easier to use by aggregating combination messages into one atomic message.
Adds new properties for several Channel Voice Messages.
Adds several new Channel Voice Messages to provide increased Per-Note control and musical expression.
Adds new data messages, including System Exclusive 8 and Mixed Data Set. The System Exclusive 8 message is very similar to MIDI 1.0 System Exclusive but with an 8-bit data format. The Mixed Data Set Message is used to transfer large data sets, including non-MIDI data.
Keeps all System messages the same as in MIDI 1.0.
Expanded Resolution and Expanded Capabilities
A MIDI 2.0 Protocol Note On message shows the expansions beyond its MIDI 1.0 Protocol equivalent. It has higher-resolution Velocity, and two new fields, Attribute Type and Attribute Data, provide space for additional data such as articulation or tuning details.
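A sketch of packing that 64-bit Note On, with 16-bit Velocity and the two Attribute fields, might look like this (field layout follows the published UMP message diagrams, but treat it as illustrative rather than reference code):

```python
# 64-bit MIDI 2.0 Note On (Message Type 0x4):
# word 0: [MT][Group][status|channel][note][attribute type]
# word 1: [16-bit velocity][16-bit attribute data]

def ump_midi2_note_on(group, channel, note, velocity16, attr_type=0, attr_data=0):
    word0 = (0x4 << 28) | (group << 24) | ((0x90 | channel) << 16) | (note << 8) | attr_type
    word1 = (velocity16 << 16) | attr_data
    return word0, word1

w0, w1 = ump_midi2_note_on(group=0, channel=0, note=60, velocity16=0xFFFF)
print(f"{w0:08X} {w1:08X}")  # -> 40903C00 FFFF0000
```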
Easier to Use: Registered Controllers (RPN) and Assignable Controllers (NRPN)
Creating and editing RPNs and NRPNs with MIDI 1.0 Protocol requires the use of compound messages. These can be confusing or difficult for both developers and users. MIDI 2.0 Protocol replaces RPN and NRPN compound messages with single messages. The new Registered Controllers and Assignable Controllers are much easier to use.
The MIDI 2.0 Protocol replaces RPN and NRPN with 16,384 Registered Controllers and 16,384 Assignable Controllers that are as easy to use as Control Change messages.
Managing so many controllers might be cumbersome. Therefore, Registered Controllers are organized in 128 Banks, each Bank having 128 controllers. Assignable Controllers are also organized in 128 Banks, each Bank having 128 controllers.
Registered Controllers and Assignable Controllers support data values up to 32 bits in resolution.
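Since 128 Banks of 128 controllers give the 16,384 total, addressing one of them is simple arithmetic:

```python
# A controller is addressed by (bank, index); both range from 0 to 127.

def rc_number(bank: int, index: int) -> int:
    """Flatten a (bank, index) pair into a single controller number."""
    return bank * 128 + index

def rc_address(number: int) -> tuple:
    """Recover the (bank, index) pair from a controller number."""
    return divmod(number, 128)

print(rc_number(1, 3))    # -> 131
print(rc_address(16383))  # -> (127, 127), the last controller
```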
MIDI 2.0 Program Change Message
The MIDI 2.0 Protocol combines the Program Change and Bank Select mechanisms from MIDI 1.0 Protocol into one message. The MIDI 1.0 mechanism for selecting Banks and Programs requires sending three MIDI messages; the new MIDI 2.0 Program Change message carries both the Bank Select and the Program Change in a single message. Banks and Programs in MIDI 2.0 translate directly to Banks and Programs in MIDI 1.0.
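A side-by-side sketch of the two mechanisms (byte values illustrative):

```python
# MIDI 1.0: selecting a sound takes three separate messages.
midi1_messages = [
    (0xB0, 0, 5),   # Control Change: Bank Select MSB (CC0)
    (0xB0, 32, 2),  # Control Change: Bank Select LSB (CC32)
    (0xC0, 12),     # Program Change
]

# MIDI 2.0: one Program Change message carries the bank and the program,
# and translates directly back to the three MIDI 1.0 messages above.
midi2_program_change = {"bank_msb": 5, "bank_lsb": 2, "program": 12}
```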
Built for the Future
MIDI 1.0 is not being replaced. Rather it is being extended and is expected to continue, well integrated with the new MIDI 2.0 environment. It is part of the Universal MIDI Packet, the fundamental MIDI data format.
In the meantime, MIDI 1.0 works really well. In fact, MIDI 2.0 is just more MIDI. As new features arrive on new instruments, they will work with existing devices and systems. The same is true for the long list of other additions made to MIDI since 1983. MIDI 2.0 is just part of the evolution of MIDI that has gone on for 36 years. The step by step evolution continues.
Many MIDI devices will not need any of the new features of MIDI 2.0 in order to perform all their functions. Some devices will continue to use the MIDI 1.0 Protocol while using other extensions of MIDI 2.0, such as Profile Configuration, Property Exchange or Process Inquiry.
MIDI 2.0 is the result of a global, decade-long development effort.
Unlike MIDI 1.0, which was initially tied to a specific hardware implementation, the new Universal MIDI Packet format makes it easy to implement MIDI 2.0 on any digital transport. MIDI 2.0 already runs on USB and the Analog Devices A2B bus, and we are working on a network transport spec.
To enable future applications that we can’t envision today, there’s ample space reserved for brand-new MIDI messages.
Further development of the MIDI specification, as well as safeguards to ensure future compatibility and growth, will continue to be managed by the MIDI Manufacturers Association working in close cooperation with the Association of Musical Electronics Industry (AMEI), the Japanese trade association that oversees the MIDI specification in Japan.
MIDI will continue to serve musicians, DJs, producers, educators, artists, and hobbyists—anyone who creates, performs, learns, and shares music and artistic works—in the decades to come.
MIDI 2.0 FAQs
We have been monitoring the comments on a number of websites and wanted to provide some FAQs about MIDI 2.0 as well as videos of some requested MIDI 2.0 features.
Will MIDI 2.0 devices need to use a new connector or cable?
No, MIDI 2.0 is a transport agnostic protocol.
Transport- To transfer or convey from one place to another
Agnostic- designed to be compatible with different devices
Protocol-a set of conventions governing the treatment and especially the formatting of data in an electronic communications system
That’s engineering speak for: MIDI 2.0 is a set of messages, and those messages are not tied to any particular cable or connector.
When MIDI first started it could only run over the classic 5 Pin DIN cable and the definition of that connector and how it was built was described in the MIDI 1.0 spec.
However, the MIDI Manufacturers Association and the Association of Musical Electronics Industry (AMEI) soon defined how to run MIDI over many different cables and connectors.
So for many years, MIDI 1.0 has been a transport agnostic protocol.
MIDI 1.0 messages currently run over 5 PIN Din, serial ports, Tip Ring Sleeve 1/8″ cables, Firewire, Ethernet and USB transports.
Can MIDI 2.0 run over those different MIDI 1.0 transports now?
Yes, MIDI 2.0 products can use the MIDI 1.0 protocol, and can even use 5 Pin DIN, if they support the automated bi-directional communication of MIDI-CI and:
One or more Profiles controllable by MIDI-CI Profile Configuration messages.
Any Property Data exchange by MIDI-CI Property Exchange messages.
Any Process Inquiry exchange by MIDI-CI Process Inquiry messages.
However, to run the Universal MIDI Packet and take advantage of MIDI 2.0 Voice Channel messages with expanded resolution, new specifications need to be written for each transport.
The new Universal MIDI Packet format will be common to all new transports defined by AMEI and The MIDI Association. The Universal MIDI Packet contains both MIDI 1.0 messages and MIDI 2.0 Voice Channel Messages, plus some messages that can be used with both.
The most popular MIDI transport today is USB. The vast majority of MIDI products are connected to computers or hosts via USB.
The USB specification for MIDI 2.0 is the first transport specification completed, and we are working on a UMP Network Transport specification for Ethernet and wireless connectivity.
Can MIDI 2.0 provide more reliable timing?
Yes. Products that support the new USB MIDI Version 2 UMP format can provide higher speed for better timing characteristics. More data can be sent between devices to greatly lessen the chances of data bottlenecks that might cause delays.
UMP format also provides optional “Jitter Reduction Timestamps”. These can be implemented for both MIDI 1.0 and MIDI 2.0 in UMP format.
With JR Timestamps, we can mark multiple Notes to play with identical timing. In fact, all MIDI messages can be tagged with precise timing information. This also applies to MIDI Clock messages which can gain more accurate timing.
Goals of JR Timestamps:
Capture a performance with accurate timing
Transmit MIDI messages with accurate timing over a system that is subject to jitter.
Does not depend on system-wide synchronization, master clock, or explicit clock synchronization between Sender and Receiver.
Note: There are two different sources of error for timing: Jitter (precision) and Latency (sync). The Jitter Reduction Timestamp mechanism only addresses the errors introduced by jitter. The problem of synchronization or time alignment across multiple devices in a system requires a measurement of latency. This is a complex problem and is not addressed by the JR Timestamping mechanism.
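A conceptual sketch of the jitter-reduction idea, with units and message framing simplified (real JR Timestamps are UMP messages with their own clock rules):

```python
# The sender tags messages with a timestamp from its own clock; the
# receiver replays them preserving the sender's spacing, so transport
# jitter doesn't reach the synth engine.

def schedule(received):  # received: list of (sender_timestamp, message)
    base_ts = received[0][0]
    return [(ts - base_ts, msg) for ts, msg in received]  # relative play times

arrived = [(1000, "note_on A"), (1010, "note_on B"), (1010, "note_on C")]
print(schedule(arrived))  # B and C play with identical timing, as tagged
```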
Also we have added Delta Time Stamps to the MIDI Clip File Specification.
Can MIDI 2.0 provide more resolution?
Yes. MIDI 1.0 Voice Channel messages are usually 7 bit (14 bit is possible but not widely implemented, because there are only 128 CC messages).
With MIDI 2.0 Voice Channel Messages, velocity is 16 bit.
The 128 Control Change messages, 16,384 Registered Controllers, 16,384 Assignable Controllers, Poly and Channel Pressure, and Pitch Bend are all 32 bit resolution.
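One simple way to picture the upscaling is bit replication, which keeps 0 at 0 and maps 127 to full scale. The actual MIDI 2.0 translation rules are more precise about preserving the center value, so treat this as a sketch of the idea only:

```python
# Upscale a 7-bit MIDI 1.0 value to 16 bits by replicating its bits into
# the low-order positions, so the full range is covered.

def scale_7_to_16(v: int) -> int:
    return (v << 9) | (v << 2) | (v >> 5)

print(scale_7_to_16(0), scale_7_to_16(127))  # -> 0 65535
```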
Can MIDI 2.0 make it easier to have microtonal control and different non-western scales?
Yes, MIDI 2.0 Voice Channel Messages allow Per Note precise control of the pitch of every note to better support non-western scales, arbitrary pitches, note retuning, dynamic pitch fluctuations or inflections, or to escape equal temperament when using the western 12 tone scale.
MIDI Association partner AudioCipher Technologies has just published Version 3.0 of their melody and chord progression generator plugin. Type in a word or phrase and AudioCipher will automatically generate MIDI files for any virtual instrument in your DAW. AudioCipher helps you overcome creative block with the first ever text-to-MIDI VST for music producers.
Chord generator plugins have been a hallmark of the MIDI effects landscape for years. Software like Captain Chords, Scaler 2, and ChordJam are some of the most popular in the niche. Catering to composers, these apps tend to feature music theory notation concepts like scale degrees and Roman numerals. They provide simple ways to apply chord inversions, sequence chords, and control the BPM. This lets users modify chord voicings and edit MIDI in the plugin before dragging it to a track.
AudioCipher offers similar controls over key signature, scale selection, chord selection, rhythm control, and chord/rhythm randomization. However, by removing in-app arrangement, users get a simplified interface that’s easier to understand and takes up less visual real estate in the DAW. Continue your songwriting workflow directly in the piano roll to perform the same actions that you would in a VST.
AudioCipher retails at $29.99 rather than the $49 to $99 price points of its competitors. When new versions are released, existing customers receive free software upgrades forever. Three versions have been published in the past two years.
Difficulty With Chord Progressions
Beginner musicians often have a hard time coming up with chord progressions. They lack the skills to experiment quickly on a synth or MIDI keyboard. Programming notes directly into the piano roll is a common workaround, but it’s time consuming, especially if you don’t know any music theory and are starting from scratch.
Intermediate musicians may understand theory and know how to create chords, but struggle with finding a good starting point or developing an original idea.
Common chord progressions are catchy but run the risk of sounding generic. Pounding out random chords without respect for the key signature is a recipe for disaster. Your audience wants to hear that sweet spot between familiarity and novelty.
Most popular music stays in a single key and leverages chord extensions to add color. The science of extending a chord is not too complicated, but it can take time to learn.
Advanced musicians know how to play outside the constraints of a key, using modulation to prepare different chords that delight the listener. But these advanced techniques do require knowledge and an understanding of how to break the rules. It’s also hard to teach old dogs new tricks, so while advanced musicians have a rich vocabulary, they are at risk of falling into the same musical patterns.
These are a few reasons that chord progression generators have become so popular among musicians and songwriters today.
AudioCipher’s Chord Progression Generator
Example of AudioCipher V3 generating chords and melody in Logic Pro X
Overthinking the creative process is a sure way to get frustrated and waste time in the DAW. AudioCipher was designed to disrupt ordinary creative workflows and introduce a new way of thinking about music. The first two versions of AudioCipher generated single-note MIDI patterns from words. Discovering new melodies, counter-melodies and basslines became easier than ever.
Version 3.0 continues the app’s evolution with an option to toggle between melody and chord generator modes. AudioCipher uses your word-to-melody cipher as a constant variable, building a chord upon each of the encrypted notes. Here’s an overview of the current features and how to use them to inspire new music.
AudioCipher V3.0 Features
Choose from 9 scales: The 7 traditional modes, harmonic minor, and the twelve-note chromatic scale. These include Major, Minor, Dorian, Phrygian, Lydian, Mixolydian, and Locrian.
Choose from six chord types including Add2, Add4, Triad, Add6, 7th chords, and 9ths.
Select the random chord feature to cycle through chord types. The root notes will stay the same (based on your cryptogram) but the chord types will change, while sticking to the notes in your chosen scale.
Control your rhythm output: Whole, Half, Quarter, Eighth, Sixteenth, and all triplet subdivisions.
Randomize your rhythm output: Each time you drag your word to a virtual instrument, the rhythm will be randomized with common and triplet subdivisions between half note and 8th note duration.
Combine rhythm and chord randomization together to produce an endless variety of chord progressions based on a single word or phrase of your choice. Change the scale to continue experimenting.
Use playback controls on the standalone app to audition your text before committing. Drag the MIDI to your software instrument to produce unlimited variation and listen back from within your DAW.
The default preset is in C major with a triad chord type. Use the switch at the top of the app to move between melody and chord generator modes.
How to Write Chord Progressions and Melodies with AudioCipher
Get the creative juices flowing with this popular AudioCipher V3 technique. You’ll combine the personal meaning of your words with the power of constrained randomness. Discover new song ideas rapidly and fine-tune the MIDI output in your piano roll to make the song your own.
Choose a root and scale in AudioCipher
Switch to the Chord Generator option
Select “Random” from the chord generator dropdown menu
Turn on “Randomize Rhythm” if you want something bouncy or select a steady rhythm with the slider
Type a word into AudioCipher that has meaning to you (try the name of something you enjoy or desire)
Drag 5-10 MIDI clips to your software instrument track
Choose a chord progression from the batch and try to resist making any edits at first
Next we’ll create a melody to accompany your chord progression.
Keep the same root and scale settings
Switch to Melody Generator mode
Create a new software instrument track, preferably with a lead instrument or a bass
Turn on “Randomize Rhythm” if it was previously turned off
Drag 5-10 MIDI clips onto this new software instrument track
Move the melodies up or down an octave to find the right pitch range to contrast your chords
Select the best melody from the batch
Adjust MIDI in the Piano Roll
Once you’ve found a melody and chord progression that inspires you, proceed to edit the MIDI directly in your piano roll. Quantize your chords and melody in the piano roll, if the triplets feel too syncopated for your taste. You can use sound design to achieve the instrument timbre you’re looking for. Experiment with additional effects like adding strum and arpeggio to your chords to draw even more from your progressions.
With this initial seed concept in place, you can go on to develop the rest of the song using whatever techniques you’d like. Return to AudioCipher to generate new progressions and melodies in the same key signature. Reference the circle of fifths for ideas on how to update your key signature and still sound good. Play the chords and melody on a MIDI keyboard until you have ideas for the next section on your own. Use your DAW to build on your ideas until it becomes a full song.
Technical specs
AudioCipher is a 64-bit application that can be loaded either as a standalone app or as a VST3 / Audio Component in your DAW of choice. Ableton, Logic Pro X, FL Studio, Reaper, Pro Tools, and Garageband have been tested and confirmed to work. Installers are available for both macOS and Windows 10, with installer tutorials available on the website’s FAQ page.
A grassroots hub for innovative music software
Along with developing VSTs and audio sample packs, AudioCipher maintains an active blog that covers the most innovative trends in music software today. MIDI.org has covered AudioCipher’s partnerships with AI music software developers like MuseTree and the AI music video generator VKTRS.
AudioCipher’s recent articles dive into the cultural undercurrents of experimental music philosophy. One piece describes sci-fi author Philip K. Dick’s concept of “synchronicity music”, exploring the role of musicians within the simulation theory of his VALIS trilogy. Another article outlines the rich backstory of Plantwave, a device that uses electrodes to turn plants into MIDI music.
The blog also advocates for small, experimental software like Delay Lama, Riffusion, and Text To Song, sharing tips on how to use and access each of them. Grassroots promotion of these tools brings awareness to the emerging technology and spurs those developers to continue improving their apps.
The Register posted an article today about Firefox supporting Web MIDI.
MIDI was created by a small group of American and Japanese synthesiser makers. Before it, you could hook synths, drum machines and sequencers together, but only through analogue voltages and pulses. Making, recording and especially touring electronic music was messy, drifty and time-consuming. MIDI made all that plug-and-play, and in particular let $500 personal computers take on many of the roles of $500/day recording studios; you could play each line of a score into a sequencer program, edit it, copy it, loop it, and send it back out with other lines.
Home taping never killed music, but home MIDI democratised it. Big beat, rave, house, IDM, jungle, if you’ve shaken your booty to a big shiny beat any time in the last forty years, MIDI brought the funk.
It’s had a similar impact in every musical genre, including film and gaming music, and contemporary classical. Composers of all of the above depend on digital audio workstations, which marshal multiple tracks of synthesised and sampled music, virtual orchestras all defined by MIDI sequences. If you want humans to sing it or play it on instruments made of wood, brass, string and skins, send the MIDI file to a scoring program and print it out for the wetware API. Or send it out to e-ink displays, MIDI doesn’t care.
By now, it doesn’t much matter what genre you consider, MIDI is the ethernet of musical culture, its bridge into the digital.
The Register Post was inspired by this Tweet from the BBC Archives.
#OnThisDay 1984: Tomorrow’s World had instruments that sounded exactly like different instruments, thanks to the magic of microprocessors. pic.twitter.com/wbhm14WakD
GLASYS (Gil Assayas) was a winner of the MIDI Association’s 2022 Innovation Awards for artistic installations. He’s a keyboard player, composer, sound designer, and video content creator who currently performs live with Todd Rundgren’s solo band. The internet largely knows GLASYS for his viral MIDI art and chiptune music.
We spoke with Gil to learn more about how he makes music. I’ll share that interview with him below. First, let’s have a quick review of his newly released chiptune album.
MIDI Art that Tugs on my Heartchips
The latest record from GLASYS, Tugging On My Heartchips, debuted January 2023 and captures the nostalgia of early 8-bit game music perfectly, with classic sound patches that transport the listener back in time. The arrangements are true to the genre and some of the songs even have easter eggs to find.
Gil created MIDI art to inspire multiple songs on the album, elevating the album’s conceptual value into uncharted meta-musical territory. He even created music video animations of the MIDI notes in post production. On track two, The MIDI Skull Song, you can almost hear the swashbuckling pirates in search of buried treasure. Take a listen here:
The MIDI Gargoyle Song features an even more complex drawing, with chromatic lines to put any pianist’s hands in a pretzel. Once the picture is finished, Gil’s gargoyle comes to life in a funny animation and dances to the finished song. It’s the first time I’ve seen someone create animations from MIDI notes in the piano roll!
Heartchips delivers all the bubbly synths and 8-bit percussion you could want from a chiptune album. But with Gil, there’s more to the music than aesthetic bravado. Where other artists lean on retro sounds to make mid-grade music sound more interesting, GLASYS has mastered the composing and arrangement skills needed to evoke the spirit of early 90s games.
It can take several listens to focus on each of the album’s sonic elements. The mix and panning are impeccable. Gil rolls off some of the harsh overtones in the instrument’s waveform, to make it easier on our ears. But there’s something special happening in the arrangement, that we discussed in more detail during our interview.
Drawing from a classic 8-Bit technique
The playful acoustics of Heartchips mask Gil’s complex harmonic and rhythmic ideas like a coating of sugar.
Gil gives each instrument a clear sense of purpose and identity, bringing them together in a song that tells a story without words. To accomplish this, he uses techniques from early game music, back when composers had only five instrument channels to use.
In the 1980s and 90s, as portable gaming consoles became popular, there was a limit to the number of notes a microchip could store and play at once. Chords had to be hocketed, or broken up into separate notes, so that the other instrument channels could be used for lead melody, accompaniment and percussion.
As a result, the classic 8-bit composers avoided sustained chords unless the entire song was focused on that one instrument. Every instrument took on an almost melodic quality.
While Heartchips doesn’t limit itself to five instrument channels per song, it does align with the idea that harmony and chord progressions should be outlined rather than merely sustained as a chord.
When GLASYS outlines a chord as an arpeggio in the bass, you’ll often hear two or three countermelodies in the middle and upper registers. Each expresses a unique idea, completely different from the others, yet somehow working perfectly with them. That’s the magic of his art.
There are a few moments on the album when chords are sustained for a measure at a time, like on the tracks No School Today or Back to Reality. These instances acquire an almost dramatic effect because they disrupt your expectations as a listener.
Overall, I found Tugging on my Heartchips to be a fun listening experience with lots of replay value.
What’s up with GLASYS in 2023?
In February 2023, GLASYS branched out from MIDI piano roll drawings to audio spectrograms. This new medium grants him the ability to draw images with more than MIDI blocks.
A spectrogram is a kind of 2D image. It’s a visual map of the sound waves in an audio file. It reads left to right, just like a piano roll. The X axis represents time and the Y axis represents frequency.
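For readers who want to see that time/frequency map for themselves, here is a minimal Python sketch using scipy’s spectrogram function (it assumes numpy, scipy, and matplotlib are installed):

```python
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs = 22050                                # sample rate in Hz
t = np.linspace(0, 2, 2 * fs, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t)       # a plain 440 Hz tone

f, times, Sxx = spectrogram(audio, fs)
plt.pcolormesh(times, f, Sxx)             # X = time, Y = frequency
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.show()                                # the tone appears as one bright line
```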
Some other artists (Aphex Twin, Dizasterpeace) have hidden images in spectrograms before, but those previous efforts only generated white noise. GLASYS has defied everyone’s expectations with spectrogram art created from his own voice and keyboards.
Here’s one of his latest videos, humming and whistling accompaniment to a piano arrangement in order to create a dragon. It may be the first time in history that something like this has been performed and recorded:
An Interview with GLASYS (Gil Assayas)
I’ve really enjoyed the boundary-defying MIDI art that comes from GLASYS, so I reached out on behalf of the MIDI Association to ask a few questions and learn more. Here’s that conversation.
E: You’ve released an album in January called Tugging On My Heartchips. Can you talk about what inspired you to write these songs and share any challenges that came up while creating it?
G: Sure. Game boy was a big part of my childhood. It was the only console we had, because in Israel it was more expensive and harder to get a hold of other systems.
The first games I had were Link’s Awakening, Donkey Kong, Battletoads, and Castlevania. I loved the music and what these composers could achieve with 4 tracks, using pulse wave and noise. Somehow they could still create these gorgeous melodies.
My experience growing up with those games was the main inspiration for this album. I never really explored these sounds in previous albums. I always went more for analog synths.
E: Your first GLASYS EP, The Pressure, came out in 2016 but your Youtube channel goes back almost a decade. Can you tell me a bit about the history of GLASYS?
G: When I first got started, I was playing in a band in Israel and every so often I would write a solo work that didn’t fit the band’s sound. So I created the GLASYS channel to record those ideas occasionally. After moving to the United States, I had a lot more time to focus on my own music and that’s when things started picking up.
E: Can you tell me more about your mixing process? Do you write, record, and mix everything yourself?
G: Yes, I write everything myself and record most of it in my home studio. Nowadays I mix everything myself, though in the past I’ve worked with some great mixing engineers such as Tony Lash (Dandy Warhols, Elliot Smith).
E: I think the playful tone of Heartchips will carry most listeners away. It’s easy to overlook the difficulty of creating a chiptune album like this, not to mention all the video work you do on social media to promote it. You’ve nailed the timbre of your instruments and compositional style.
G: Yeah, mixing chiptune can be trickier than it seems because of all the high end harmonic content. None of the waveforms are filtered and everything has overtones coming through. I found that a little bit of saturation on the square waves, pulse waves, and a little bitcrushing can smooth out the edges a bit. EQ can take out some of the harsh highs, and you can do some sidechaining. These are things you can’t do on a gameboy or NES.
E: How much of your time is spent composing versus mixing and designing your instruments?
G: Mixing takes a lot longer. Composition is the easy part. The hard part is making something cool enough to want to share. I can be a bit of a perfectionist. So I’ll do a mix, try to improve it, rinse and repeat ten revisions until I’m happy with it. That’s one of the reasons it can be better to do the mix myself, haha.
E: Before this interview, we were talking about aphantasia where people can’t visualize images but they can still dream in images. Do you ever dream in music?
G: Dreams are such an emotional experience. When you get a musical idea in your dreams, more often than not you forget it when you wake up. But when you do remember it, it’s very surreal. Actually, my first song ever was based on a purple hippo I saw in my dream. I was 5 years old, heard the melody, figured it out and wrote it down with my dad.
E: What inspired you to get into MIDI art?
G: Well, there were a couple of things. Back in 2017, an artist by the name of Savant created some amazing MIDI art – I believe he was the first to do it in a way that sounds musical. He inspired other artists to create MIDI art, such as Andrew Huang who created his famous MIDI Unicorn (which I performed live in one of my videos).
There was another piece in particular that blew me away, this Harry Potter logo MIDI art that uses themes from Harry Potter, masterfully created by composer Hana Shin. I don’t particularly care for Harry Potter, but I just found the concept and execution really inspiring and I thought it would be awesome to perform something like that live. In 2021, Jacob Collier did a few videos where he spelled out words in real time, which proved that it’s possible and motivated me to finally give it a shot.
My idea was to build on the MIDI art concept and draw things that were meaningful to me, such as video game logos and characters – and do it live, so I needed to write them in a way that would be possible to play with two hands. I actually just wanted to do it once or twice but it was the audience who motivated me to keep going. It got such a huge response, I’ve ended up doing nearly fifty of them. I’m now focusing on other things, but I might get back to MIDI art in the future.
E: Do you have any advice for MIDI composers who struggle coming up with new ideas?
G: Sure, I do get writer’s block sometimes. As far as advice goes… I know how it goes where you keep rewriting something you’ve already created before. Everyone has their subconscious biases, things that they tend to go to without thinking. So even though they’re trying to do something new, they end up repeating themselves. It can be a struggle for sure.
If you find yourself sitting in front of your DAW not knowing what to do, then don’t sit in front of your DAW. Go outside, take a guitar with you, and start jamming. Sometimes a change of environment, breaking the habit, and getting out of the rut of doing the same thing over and over can really help you.
Listen to something entirely different, and new ideas will come. A lot of the problem comes from listening to the same stuff or only listening to one genre of music, so everything you write starts to sound like it.
Listen to music outside of the genres you like. For example, if you never listen to Cuban music, listen to it for a week. Some of it will creep into your subconscious, and you might end up writing an indie rock song with Cuban elements that’s awesome and sounds entirely new.
E: Are there any organizational tricks that you use to manage the sheer volume of musical ideas you come up with?
G: Yeah I used to have a lot of little ideas and save them in different folders, but it was too difficult to get back to things that I had written a year ago. Time goes by, you forget about how you felt when you wrote that thing, you feel detached from it.
If I decide to do something, I work on just one or two tracks until I’m done with them. I don’t record every idea I have either. I have to feel motivated enough to do something with it.
E: Do you have perfect pitch? Can you hear music in your head before playing it?
G: Definitely, yeah I can hear music in my head. I do have perfect pitch but it has declined a little bit as I get older.
E: What can we expect from GLASYS in 2023?
G: Lots of new music and videos – I’ve got many exciting ideas that I’m looking forward to sharing!
To learn more about Gil’s musical background, check out interviews with him here, here, and here. You can also visit the GLASYS website or check out his Youtube channel.
If you enjoyed this artist spotlight and want to read more about innovative musicians, software, and culture in 2023, check out the AudioCipher blog. We’ve recently covered Holly Herndon’s AI music podcast Interdependence, shared a new Japanese AI music plugin called Neutone, and promoted an 80-musician Songcamp project that created over 20,000 music NFTs in just six weeks. AudioCipher is a MIDI plugin that turns words into music within your DAW.
Hans Zimmer is one of the most famous and prolific film composers in the world.
He has composed music for over 150 films including blockbusters like The Lion King, Gladiator, The Last Samurai, the Pirates of the Caribbean, The Dark Knight, Inception, Interstellar and Dunkirk.
In a recent interview with Ben Rogerson from MusicRadar, this is what he said about MIDI.
MIDI is one of the most stable computer protocols ever written.
MIDI saved my life, I come from the days of the Roland MicroComposer, typing numbers, and dealing with Control Voltages. I was really happy when I managed to have eight tracks of sequencer going. From the word go, I thought MIDI was fabulous.
by Hans Zimmer for MusicRadar
To read the whole article, click on the link below
A new generation of AI MIDI software has emerged over the past 5 years. Google, OpenAI, and Spotify have each published a free MIDI application powered by machine learning and artificial intelligence.
The MIDI Association reported on innovations in this space previously. Google’s AI Duet, their Music Transformer, and Massive Technology’s AR Pianist all rely on MIDI to function properly. We’re beginning to see the emergence of browser and plugin applications linked to cloud services, running frameworks like PyTorch and TensorFlow.
In this article we’ll cover three important AI MIDI tools – Google Magenta Studio, OpenAI’s MuseNet, and Spotify’s Basic Pitch MIDI converter.
Google Magenta Studio
Google Magenta is a hub for music and artificial intelligence today. Anyone who uses a DAW and enjoys new plugins should check out the free Magenta Studio suite. It includes five applications. Here’s a quick overview of how they work:
Continue – Continue lets users upload a MIDI file and leverage Magenta’s music transformer to extend the music with new sounds. Keep your temperature setting close to 1.0-1.2, so that your MIDI output sounds similar to the original input but with variations.
Drumify – Drumify creates grooves based on the MIDI file you upload. They recommend uploading a single instrumental melody at a time to get the best results. For example, upload a bass line and it will try to produce a drum beat that complements it, in MIDI format.
Generate – Maybe the closest tool in the collection to a ‘random note generator’, Generate uses a Variational Autoencoder (MusicVAE) and has trained on millions of melodies and rhythms within its dataset.
Groove – This nifty tool takes a MIDI drum track and uses Magenta to modify the rhythm slightly, giving it a more human feel. So if your music was overly quantized or had been performed sloppily, Groove could be a helpful tool.
Interpolate – This app asks you for two separate MIDI melody tracks. When you hit generate, Magenta composes a melody that bridges them together.
The Magenta team is also responsible for Tone Transfer, an application that transforms audio from one instrument to another. It’s not a MIDI tool, but you can use it in your DAW alongside Magenta Studio.
OpenAI MuseNet
MuseTree – Free Nodal AI Music Generator
OpenAI is a major player in the AI MIDI generator space. Their DALL-E 2 web application took the world by storm this year, creating stunningly realistic artwork and photographs in any style. But what you might not know is that they’ve created two major music applications, MuseNet and Jukebox.
MuseNet – MuseNet is comparable to Google’s Continue, taking in MIDI files and generating new ones. But users can constrain the MIDI output to parameters like genre and artist, introducing a new layer of customization to the process.
MuseTree – If you’re going to experiment with MuseNet, I recommend using this open source project MuseTree instead of their demo website. It’s a better interface and you’ll be able to create better AI music workflows at scale.
Jukebox – Published roughly a year after MuseNet, Jukebox focuses on generating audio files based on a set of constraints like genre and artist. The output is strange, to say the least. It does kind of work, but in other ways it doesn’t. The application can also be tricky to operate, requiring a Google Colab account and some patience troubleshooting the code when it doesn’t run as expected.
Spotify is the third major contender in this AI music generator space. They’re no stranger to music production tools: Soundtrap, a mobile-friendly music creation app launched in 2013, is now part of Spotify. As for machine learning, there’s already a publicly available Spotify AI toolset that powers their recommendation engine.
Basic Pitch is a free browser tool that lets you upload any song as an audio file and convert it into MIDI. Basic Pitch leverages machine learning to analyze the audio and predict how it should be represented in MIDI. Prepare to do some cleanup, especially if there’s more than one instrument in the audio.
Spotify hasn’t published a MIDI generator like MuseNet or Magenta Studio’s Continue. But in some ways Basic Pitch is even more helpful, because it generates MIDI you can use right away, for a practical purpose. Learn your favorite music quickly!
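Spotify has also open-sourced Basic Pitch as a Python package (pip install basic-pitch). At the time of writing, a minimal conversion looks roughly like this; check the project README for the current interface:

```python
from basic_pitch.inference import predict

# Transcribe an audio file; midi_data is a PrettyMIDI object you can
# save and then drag into your DAW.
model_output, midi_data, note_events = predict("my_song.wav")
midi_data.write("my_song.mid")
```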
The Future of AI MIDI Generators
The consumer applications we’ve mentioned, like Magenta Studio, MuseTree, and Basic Pitch, will give you a sense of their current capabilities and limitations. For example, Magenta Studio and MuseTree work best when they are fed special types of musical input, like arpeggios or pentatonic blues melodies.
Product demos often focus on the best use cases, but as you push these AI MIDI generators to their limits, the output becomes less coherent. That being said, there’s a clear precedent for future innovation and the race is on, amongst these big tech companies, to compete and innovate in the space.
Private companies, like AIVA and Soundful, are also offering AI music generation for licensing. Their user-friendly interfaces are built for social media content creators that want to license music at a lower cost. Users create an account, choose a genre, generate audio, and then download the original music for their projects.
Large digital content libraries have been acquiring AI music generator startups in recent years. Apple bought up a London company called AI Music in February 2022, while Shutterstock purchased Amper Music in 2020. This suggests a large upcoming shift in how licensed music is created and distributed.
At the periphery of these developments, we’re beginning to see robotics teams that have successfully integrated AI music generators into singing, instrument-playing, animatronic AI music robots like Shimon and Kuka. Built by the Center for Music Technology at Georgia Tech, Shimon has performed live with jazz groups and can improvise original solos thanks to the power of artificial intelligence.
Stay tuned for future articles, with updates on this evolving software and robotics ecosystem.
MIDI art is a fun, emerging technique that’s taking the internet by storm. This unusual approach to songwriting centers around creating 2-D art from colored MIDI notes in the piano roll of a Digital Audio Workstation, displayed to the listener for their amusement.
Not all MIDI art sounds good, but it usually expresses a visual concept. The emergence of MIDI art owes its success in large part to video content on Youtube and other social media channels. Live MIDI artist GLASYS even won second prize in the 2022 MIDI Innovation Awards.
To learn more and watch some videos, check out this article on MIDI art at the AudioCipher site.
MIDI Association contributor Walter Werzowa was featured on CNN today (Dec 26, 2021)
One of the best things about the MIDI Association is the great people we get to meet and associate with. After all they don’t call it an association for nothing. This year during May Is MIDI Month, we were putting together a panel on MIDI and music therapy and Executive Board member Kate Stone introduced us to Walter Werzowa.
So we were pleasantly surprised today when one of Walter’s latest projects was featured on Fareed Zakaria’s GPS show.
As I’ll detail on GPS in this week’s Next Big Idea, musicologists, composers & computer scientists have used AI to complete Beethoven’s unfinished 10th symphony.
We first got interested in Walter because of HealthTunes.org. HealthTunes®, an audio streaming service designed to improve physical and mental health, was founded by Walter in 2016. It uses binaural beats.
Binaural beats and isochronic tones are embedded within our music (the low humming sound some may hear), which are two different methods used for brain wave entrainment. Binaural beats work by using two slightly different frequency tones sent to each ear. Isochronic tones use a single tone with a consistent beat being turned off and on regularly. Your body automatically reacts to both binaural beats and isochronic tones with a physiological response allowing one’s brain to reach a more desired mental state by influencing brain wave activity.
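A minimal numpy sketch of the binaural idea, with two slightly detuned tones, one per ear:

```python
import numpy as np

fs, seconds = 44100, 5.0
t = np.linspace(0, seconds, int(fs * seconds), endpoint=False)
left = np.sin(2 * np.pi * 200 * t)         # 200 Hz in the left ear
right = np.sin(2 * np.pi * 204 * t)        # 204 Hz in the right ear
stereo = np.stack([left, right], axis=1)   # the 4 Hz difference is the beat
# Write `stereo` to a stereo WAV file and listen on headphones to hear it.
```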
We soon learned that Walter had done many things in his career, including memorable sonic branding themes from his company, Musikvergnuegen. Vergnuegen can be translated as joy or fun, and appears in the German word for amusement park, Vergnügungspark.
Almost everyone on the planet has heard his audio branding signatures. The Intel “bong” and the T-Mobile five-note theme are brilliant examples of simple mnemonics that could easily be described as earworms.
By the way, the term earworm comes from the German Ohrwurm, coined over 100 years ago to describe the experience of a song stuck in the brain.
Beethoven’s “finally finalized” 10th Symphony
But Walter’s latest project is perhaps his most impressive yet. He was part of a team of AI researchers and musicians that used AI to “finish” Beethoven’s unfinished Symphony No. 10. How was MIDI involved? Like most AI music projects, the AI algorithm was trained on MIDI data of not only all of Beethoven’s completed symphonies, but all of his other works, as well as works by Beethoven’s contemporaries that he would have listened to and been influenced by. You can watch NBC’s Molly Hunter interview Walter, or just listen to the results of Walter’s work below.
Below is a link to the full Beethoven X symphony performance.
Beethoven’s 10th Symphony streamed free on MagentaMusik 360 on October 9 from 7 PM. The previously unfinished piece by Ludwig van Beethoven has now been composed to completion with the help of an AI (artificial intelligence).
There’s a lot of excitement in the air – MIDI 2.0, VR/AR, spatial audio, flying taxis and Facebook’s own flight of fancy, Metaspace (it took me three goes to stop this being called Meatspace; perhaps aptly?).
Back in 1985 there was a similar air of expectation. MIDI had just been ratified by a quorum of MI and Pro Audio companies, and I’d had a personal walk-through of its immediate goals and capabilities from Dave Smith himself, then riding high with Sequential Circuits in Silicon Valley. The initial goals might have been modest: connect two keyboards, play one, and trigger the sound engine in both. But even then ‘multi-timbralism’ was floated, along with the beginnings of how MIDI instruments could be connected to and controlled by a personal computer – a state of affairs that is not materially different almost 40 years later. It was entirely appropriate for Dave to call his first venture into softsynths ‘Seer Systems’.
I’d just written my first Keyfax book and was also working as a keyboardist for John Miles, a supremely talented British pop star who’d had a string of hits in the UK, including the iconic Music, produced by Alan Parsons.
The first edition of Keyfax- the definitive guide to electronic keyboards
New Polydor signing Vitamin Z (‘zee’ for US readers but ‘zed’ for us Brits) wanted Alan to produce their debut album, and Alan approached John to supplement the duo that Vitamin Z comprised: singer Geoff Barradale and bass player Nick Lockwood. Duly, myself and our drummer, Barriemore Barlow of Jethro Tull fame, trooped down to Alan’s luxurious country house studio, The Grange, in the posh village of Benenden in Kent, where Princess Anne had gone to school.
Julian Colbeck and Barriemore Barlow relax during the Vitamin Z sessions
The Grange was equally posh. Alan had a state of the art digital recording system based around the Sony PCM-3324, if memory serves. This was a freestanding system, not computer controlled, and it didn’t have MIDI. At this time the worlds of ‘audio’ (i.e. regular recording) and the upstart MIDI had nothing whatsoever to do with each other. It would be another four years before the world’s first Digital Audio Workstation would be introduced.
Steinberg Pro 24 – One of the first MIDI sequencers
MIDI, far from being as ubiquitous as it is now, was a keyboard player’s thing, and then only for those who had even noticed it at all. I’d just picked up an Atari computer, which had MIDI built in, and had been testing out the Pro 24 ‘sequencer’ from a brand new German outfit called Steinberg. Alan, a geek then and still now, was fascinated. There still weren’t many MIDI-connectable synths on the market. I’d had my trusty Roland Juno-60 converted to MIDI from Roland’s pre-MIDI DCB (Digital Communication Bus) and brought along a DX7 and, although my memory is a little hazy here, an early Ensoniq Mirage. But the cool thing was that we could record, and then correct, change, and quantize parts directly on the Atari. This was just revolutionary and mind-expanding. However, it wasn’t exactly what you’d call stable. Charlie Steinberg had given us his home number, and it is quite possible that he and Manfred Rürup still worked out of their homes back then. For many an evening we’d be on the phone to Charlie, mainly trying to figure out synchronization issues. I remember on one call Charlie pronouncing what we’d certainly been experiencing and fearing for a while: “We do time differently,” he said, in his finest Hamburg accent. Ah, well that would certainly explain things.
Julian Colbeck and Alan Parsons chat in 1988’s Getting The Most Out Of Home Recording, the precursor to their Art & Science Of Sound Recording video series and online course.
Things have changed a lot since those 1980s days of big hair and inexplicably even bigger shoulders. Alan continued with his amazing career as a producer and performing artist. We both eventually moved to California.
I founded the company Keyfax NewMedia Inc. and in 1998 released the Phat Boy (yes, it was the ’90s), one of the first hardware MIDI controllers that could be used with a wide variety of synths and software.
Keyfax Phat-Boy MIDI Controller
But Alan and I continued our friendship and partnership, and launched Alan Parsons’ Art and Science of Sound Recording. Although the gear had changed and there were many more tools available to musicians and engineers, the core things you needed to know to produce music hadn’t really changed at all.
Multi-platinum producer, engineer and artist Alan Parsons recently released his new single “All Our Yesterdays” and announces the launch of his new DVD and HD web video educational series entitled The Art and Science of Sound Recording, or “ASSR,” produced by Keyfax NewMedia Inc. The track was written and recorded during the making of ASSR, an in-depth educational series that highlights techniques in music production while giving a detailed overview of the complete audio recording process. The series is narrated by Billy Bob Thornton and will be available as a complete DVD set in July.
LOS ANGELES, CA (PRWEB), June 23, 2010
Special 50% off promo for the MIDI Association on the new ASSR online course
The knowledge that Alan has developed over his long and incredible career is available in a number of different mediums: videos, session files, books & DVDs, live training events, and now the newest incarnation, online courses on Teachable.
Legendary engineer and producer Alan Parsons began his career at Abbey Road, working with The Beatles on Let It Be and Abbey Road. Alan became one of the first ‘name’ engineers thanks to his seminal engineering work on Dark Side Of The Moon – still an audiophile’s delight almost 50 years later.
Alan is an early adopter of technology by nature: looping, Quadraphonic, Ambisonics, MIDI, digital tape, sampling, DAWs, and Surround 5.1, with which he won the Best Immersive Audio Album GRAMMY in 2019. ASSR-Online is Alan’s bible of recording. It looks at all aspects of music production: from soundproofing a room, to the equipment including monitors and microphones, to all the processes including EQ, compression, reverbs, delays and more, and to multiple recording situations such as recording vocals, drums, guitars, keyboards, a choir, beatmaking, and of course MIDI. Based on more than 11 hours of custom video, ASSR-Online is a complete course in recording, featuring more than 50 projects, tasks, and assignments with four raw multitracks to help you develop your recording skills to a fully professional level.
Through November 15, get 50% off Alan Parsons’ ASSR-Online Recording and Music Production course through MIDI.org!
Go to the link below and add the code MIDI50 during checkout.
MIDI Controllers (Products, Physical Controls, and Messages)
Unfortunately, the word “controller” is overburdened in the MIDI lexicon; it’s probably the most overused word in the world of MIDI.
It can refer to three different things: products, physical controls, and messages.
MIDI Controller=Product
People can say “MIDI Controller” and mean a product like an IK Multimedia iRig Keys I/O 25 controller keyboard.
They might say “I’m using the Roland A-88 MKII as my MIDI Controller.”
MIDI Controller=Physical Control
But the word “controller” is also used to refer to physical controls like a Modulation Wheel, a Pitch Bend wheel, a Sustain Pedal, or a Breath Controller (yes, there’s that word again).
The word “controller” is also used to describe the MIDI messages that are sent. So you could say “I’m sending Controller #74 to control Filter Cutoff.”
In fact, there are multiple types of MIDI messages that are sometimes referred to as “Controllers”:
MIDI 1.0 Control Change Messages
Channel Pressure (aftertouch)
Polyphonic Key Pressure (poly pressure)
Pitch Bend
Registered Parameter Numbers (RPNs) in MIDI 1.0 that equate to the 16,384 Registered Controllers in MIDI 2.0
Non-Registered Parameter Numbers (NRPNs) in MIDI 1.0 that equate to the 16,384 Assignable Controllers in MIDI 2.0
MIDI 2.0 Registered Per-Note Controllers
MIDI 2.0 Assignable Per-Note Controllers
To make things a bit more convoluted, the MIDI 1.0 specification contains certain MIDI messages that are specifically named after physical controls:
Decimal Hex Function
1 0x01 Modulation Wheel or Lever
2 0x02 Breath Controller
4 0x04 Foot Controller
11 0x0B Expression Controller
64 0x40 Damper Pedal on/off (Sustain)
66 0x42 Sostenuto On/Off
67 0x43 Soft Pedal On/Off
But these are MIDI Control Change (CC) messages, not the actual physical controllers themselves.
However, most products hardwire the Mod Wheel to CC#1, set the factory default of the Damper Pedal to CC#64, and so on.
Also, on most MIDI products you can set a physical control like the Mod Wheel to send different CC messages (for example, CC#2 Breath Controller or CC#11 Expression).
MOD WHEEL is a physical controller that always generates a specific message cc001 Modulation Wheel. cc001 (Control Change) can be applied to most any function, it does not have a fixed function. It is most often used to apply Modulation depth to pitch (vibrato) but that must be assigned to the wheel on a per program basis.
by Yamaha Product Specialist Phil Clendeninn (AKA Bad Mister)
So a MIDI Controller has a MIDI Controller that sends a MIDI Controller! Or, translated into a sentence that makes more sense:
An IK Multimedia iRig Keys I/O 25 has a Mod Wheel that sends Control Change (CC) #11 Expression.
The important thing to remember:
The word MIDI controller can refer to three different things.
A type of product: the IK Multimedia iRig Keys I/O 25 is a MIDI Controller.
A physical control: the Mod Wheel on the IK Multimedia iRig Keys I/O 25 is a MIDI Controller.
A MIDI Control Change message: the Mod Wheel on the IK Multimedia iRig Keys I/O 25 is sending MIDI Controller #11 Expression.
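To make that third meaning concrete, here is a minimal sketch using the Python mido library (our choice for illustration; any MIDI library would do) that builds the same Control Change message a Mod Wheel assigned to CC#11 Expression would send:

```python
# Build the Control Change message a Mod Wheel assigned to CC#11 would
# send. Requires the third-party mido library (pip install mido).
import mido

# mido numbers channels 0-15, so channel=0 is MIDI Channel 1.
cc = mido.Message('control_change', channel=0, control=11, value=64)
print(cc)   # -> control_change channel=0 control=11 value=64 time=0

# To actually send it, open an output port (names vary by system):
# with mido.open_output() as port:
#     port.send(cc)
```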
The MPE specification was adopted by The MIDI Association at the 2018 Winter NAMM show.
MPE is designed for MIDI Devices that allow the performer to vary the pitch and timbre of individual notes while playing polyphonically. In many of these MIDI Devices, pitch is expressed by lateral motion on a continuous playing surface, while individual timbre changes are expressed by varying pressure, or moving fingers towards and away from the player.
MPE specifies the MIDI messages used for these three dimensions of control — regardless of how a particular controller physically expresses them — and defines how to configure Devices to send and receive this “multidimensional control data” for maximum interoperability.
MIDI Pitch Bend and Control Change messages are Channel Messages, meaning they affect all notes assigned to that Channel. To apply Channel Messages to individual notes, an MPE controller assigns each note its own Channel.
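Here is a rough sketch of that per-note channel idea in Python with mido; the three-channel pool below is an illustrative assumption, since a real MPE Zone is configured with a master channel plus a negotiated set of member channels.

```python
# Sketch of the MPE idea: give each sounding note its own channel, so a
# Channel Message (here, Pitch Bend) affects only that note.
import mido

member_channels = [1, 2, 3]   # illustrative per-note channels
notes = [60, 64, 67]          # a C major triad

msgs = [mido.Message('note_on', channel=ch, note=n, velocity=100)
        for ch, n in zip(member_channels, notes)]

# Bend only the middle note: it lives on its own channel, so the
# other two notes keep their pitch.
msgs.append(mido.Message('pitchwheel', channel=member_channels[1], pitch=2048))

for m in msgs:
    print(m)
```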
Ableton added MPE support in Live 11, giving Ableton users the ability to be more musically expressive.
What Is MPE?
MPE (MIDI Polyphonic Expression) allows you to control multiple instrument parameters simultaneously depending on how you press the notes on your MPE-capable MIDI controller.
With MPE you can change these individual values for every note in real-time:
Pitch Bend (horizontal movement)
Slide (vertical movement)
Pressure
MPE MIDI messages are displayed once you record or draw a note, and you can edit them at any time.
Keyboards and other controllers are no longer limited to up/down motions and sometimes pressure. The MPE specification accommodates multiple performance gestures within a single note. How hard you strike a key or pad; how much you move your fingers side to side or up and down; how much pressure you apply after striking a key; how quickly or slowly you release from the surface: all of these gestures suddenly become musical with MPE. For example, instruments can translate side-to-side motion into vibrato, like on an acoustic string instrument. A tiny amount of pressure on a key can “swell” the volume, or add brightness, to each part of a brass section.
With MPE you don’t just play a note—you play with a note. Because of this, it is an artistic breakthrough as well as a technological one. It endows electronic instruments with greater potential for expressiveness.
by Craig Anderton, Author and MIDI Association President
Add more feeling to your music
Edit your recorded MPE MIDI Messages
Select the MIDI clip and click the Note Expression tab in the Clip View Editor.
You can view each parameter by clicking the Show/Hide lane buttons.
Similar to editing automation, you can move breakpoints, copy/paste/delete them, mark them, or use the draw mode.
Morph between chords and add bends by connecting the curve of a note with any subsequent note.
Massive Technologies releases major update to AR Pianist with new MIDI and Audio features
Massive Technologies’ (MT) newest AR Pianist update is an incredibly engaging demonstration of the unique power of combining MIDI data with AI and AR/VR technologies.
They gave The MIDI Association the inside scoop on their new update to AR Pianist.
One of the major new features is the ability to import MIDI files to create virtual performances.
We’re excited to announce that a major update of AR Pianist is to be released on May 25th. We’ve been working on this update tirelessly for the past two years.
The update brings our AI technology to users’ hands, and gives them the ability to learn any song by letting the app listen to it once through the device microphone.
Using AI, the app will listen to the audio, extract notes being played, and then show you a virtual pianist playing that song for you with step by step instructions.
The app also uses machine learning and augmented reality to project the virtual avatar onto your real piano, letting you experience the performance interactively and from every angle.
Users can also record their piano performance using the microphone (or MIDI), and then watch their performance turn into a 3D / AR virtual concert. Users can share it as a video now, and to VR / AR headsets later this year.
The update also features songs and content by “The Piano Guys”, along with a licensed Yamaha “Neo” designer piano.
by Massive Technologies
A.I. Generates 3D Virtual Concerts from Sound:
“To train the AI, we brought professionally trained pianists to our labs in Helsinki, where they were asked to simply play the piano for hours. The AI observed their playing through special hardware and sensors, and throughout the process the pianist and we would check the AI’s results and give it feedback or corrections. We would then take that feedback and use it as the curriculum for the AI for our next session with the pianist. We repeated that process until the AI results closely matched the human playing technique and style.”
by Massive Technologies
Massive Technologies used MIDI Association member Google’s TensorFlow to train their AI model.
The technology’s main potential is music education, letting piano teachers create interactive virtual lessons for remote teaching. It also suits virtual piano concerts, and film or game creators who want to incorporate a super-realistic pianist in their scenes.
The key to it all is MIDI
If you look at the work being done by Google, Yamaha, Massive Technologies, The Piano Guys, and others in the AI space, MIDI is central to all of those efforts.
Why? Because MIDI is the Musical Instrument Digital Interface: to connect music with AI and machine learning algorithms, you usually have to convert it into MIDI.
How Does AR Pianist work and what can you do with it?
AR Pianist combines a number of Massive Technologies’ proprietary technologies.
Multi pitch recognition
Massive Technologies’ in-house ML models can estimate pitch and extract chords from audio streams, on the fly, in real time.
This allows you to convert audio files of solo piano recordings into MIDI data that the AI engine can analyze. Of course, you can also directly import MIDI data.
Object pose estimation
Their proprietary models can estimate the 3D position and orientation of real instruments from a single photograph.
This allows you to point your mobile device’s camera at your 88-note keyboard. The app can then map your keyboard into 3D space for use with augmented reality.
Motion synthesis and 3D Animation Pipeline
MT developed new machine learning algorithms that can synthesize novel, kinematically accurate 3D musical performances from raw audio files, for use in education and AR/VR. Their tools can perform advanced full-body and hand inverse kinematics to fit the same 3D musical performance to different avatars.
This is the part that almost seems like magic.
The app can take a MIDI or Audio performance (the Audio performance should be piano only), analyze it and generate musically correct avatar performances with the correct fingerings and hand positions including complex hand crossovers like those often used in classical or pop music (think the piano part from Bohemian Rhapsody).
Music notation rendering, in 3D
Massive Technologies has built a notation rendering engine, that can be used to display music scores in 3D and inside virtual environments, including AR / VR.
This allows you to see the notation for the performances. Because the data is essentially MIDI-like, you can slow the tempo down, set the app to wait for you to play the right note before moving forward, and use other practice techniques that are widely used in MIDI applications.
A.I. Plays Rachmaninoff Playing Himself (First Person View):
An audio piano roll recording of Rachmaninoff himself playing his famous Prelude, from 1919, reconstructed into 3D animation by Massive Technologies’ AI.
A virtual camera was attached to the virtual avatar’s head, with its movement driven by the AI, simulating eye gaze and anticipation.
Massive Technologies is Fayez Salka (MD, musician, software developer, and 3D artist) and Anas Wattar (BCom graduate from McGill University, software developer, and 3D artist).
AR Pianist is available on the Apple App Store and Google Play Store.
The app is free to download and offers in-app purchases for libraries of songs. You can check out Jon Schmidt of The Piano Guys virtually demoing AR Pianist at any Apple retail store.
A DAW’s MIDI Plug-Ins Can Provide Solutions to Common Problems
In a world obsessed with audio plug-ins, MIDI plug-ins may not seem sexy—but with MIDI’s continued vitality, they remain very useful problem solvers. For an introduction to MIDI plug-ins, please check out the article Why MIDI Effects Are Totally Cool: The Basics.
Although processing MIDI data has existed since at least the heyday of the Commodore 64, the modern MIDI plug-in debuted when Cakewalk introduced the MFX open specification for Windows MIDI plug-ins. Steinberg introduced a wrapper for MFX plug-ins, and also developed a cross-platform VST format. MIDI plug-ins run the gamut from helpful utilities that supplement a program like MOTU Digital Performer, to beat-twisting effects for Ableton Live. After Apple Logic Pro X added Audio Units-based MIDI plug-ins, interest continued to grow. Typically, MIDI plug-ins insert into MIDI tracks similarly to how audio plug-ins insert into audio tracks (Fig. 1).
Figure 1: In Cakewalk by BandLab, you can drag MIDI plug-ins from the browser into a MIDI track’s effects inserts.
Unfortunately, most companies lock MIDI plug-ins to their own programs. This article therefore takes a general approach, describing typical problems you can solve with MIDI plug-ins. Note that not all programs have plug-ins that provide these functions, nor do all hosts support MIDI plug-ins.
Instant Quantization for Faster Songwriting
MIDI plug-ins are generally real-time and non-destructive (some can work offline as well). If you’re writing a song and craft a great drum groove that suffers from shaky timing, don’t dig into the quantization menu and start editing—insert a MIDI quantizing plug-in, set it for eighth or 16th notes, and keep grooving. You can always do the “real” edits later.
Create Harmonies, Map Drums, and Do Arpeggiations
If your host has a Transpose MIDI plug-in, it might do a lot more than audio transposition plug-ins—like transpose by intervals or diatonically, change scales in the process of transposing from one key to another, or create custom transposition maps that can map notes to drums. The image above shows a variety of MIDI plug-ins; clockwise from upper left is the Digital Performer arpeggiator, Live arpeggiator, Cubase microtuner, Live randomizer, Cubase step sequencer, Live scale constrainer, Digital Performer Transposer, Cubase MIDI Echo.
Filter Data
You’re driving two instruments from a MIDI controller, and want one to respond to sustain but not the other…or filter out pitch bend before it gets to one of the instruments. Data filtering plug-ins can implement these applications, but many can also create splits and layers. If the plug-in can save presets, you can instantly call up oft-used functions (like remove aftertouch data).
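Internally, such a filter is little more than a predicate applied to the message stream. Here is a hedged sketch in Python with mido; the function names are our own, not from any particular plug-in.

```python
# Sketch of a MIDI data filter: strip pitch bend for one instrument and
# sustain (CC#64) for another, passing everything else through.
import mido

def strip_pitch_bend(msg):
    return None if msg.type == 'pitchwheel' else msg

def strip_sustain(msg):
    if msg.type == 'control_change' and msg.control == 64:
        return None
    return msg

stream = [
    mido.Message('note_on', note=60, velocity=90),
    mido.Message('control_change', control=64, value=127),  # sustain on
    mido.Message('pitchwheel', pitch=4096),
]

to_instrument_a = [m for m in map(strip_pitch_bend, stream) if m is not None]
to_instrument_b = [m for m in map(strip_sustain, stream) if m is not None]
```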
Re-Map Controllers
Feed your footpedal through a re-mapping plug-in to control breath control parameters, mod wheel, volume, aftertouch, and the like. There may also be an option to thin or randomize control data, or map data to a custom curve.
Process MIDI Data Dynamically
You can compress, expand, and limit MIDI data (to low, high, or both values). For example, a plug-in could specify that all values under a certain value adopt that value, or compress velocity dynamics by a ratio, like 2:1. While you don’t need a MIDI plug-in to do these functions (you can usually scale velocities, then add or subtract a constant using traditional MIDI processing functions), a plug-in is more convenient.
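In code, 2:1 velocity compression is a one-liner plus clamping. A minimal sketch, with example threshold and ratio values:

```python
# Compress note velocities above a threshold by a given ratio -- the MIDI
# equivalent of an audio compressor. Threshold and ratio are examples.
def compress_velocity(velocity, threshold=64, ratio=2.0):
    if velocity <= threshold:
        return velocity
    return min(127, round(threshold + (velocity - threshold) / ratio))

for v in (40, 64, 96, 127):
    print(v, '->', compress_velocity(v))   # 40->40, 64->64, 96->80, 127->96
```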
MIDI Arpeggiation Plug-Ins
Although arpeggiation isn’t as front and center in today’s music as it was when Duran Duran was tearing up the charts, it’s still valid for background fills and ear candy. With MIDI plug-in arpeggiator options like multiple octaves, different patterns, and rhythmic sync, arpeggiation is well worth re-visiting if you haven’t done so lately. Arpeggiators can also produce interesting patterns when fed into percussion tracks.
“Humanize” MIDI Parts so They Sound Less Metronomic
“Humanizer” plug-ins usually randomize parameters, like start times and/or velocities, so the MIDI timing isn’t quite so rigid. Personally, I think they’re more accurately called “how many drinks did the player have” because musicians tend not to create totally random changes. But taking a cue from that, consider teaming humanization with an event filter. For example if you have a string of 16th note hi-hat triggers, use an event filter to increase velocities that fall on the first note of a beat, and perhaps add a slight increase to the third 16th note in each series of four. Then if you humanize velocity slightly, you’ll have a part that combines conscious change with an overlay of randomness.
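Here is a rough Python sketch of that accent-then-humanize recipe for a bar of 16th-note hi-hats; all the values are illustrative:

```python
# Accent the first 16th of each beat, slightly accent the third, then
# overlay a little randomness so the part breathes. Values are examples.
import random

def humanize_hats(num_hits=16, base_velocity=80):
    velocities = []
    for i in range(num_hits):
        v = base_velocity
        if i % 4 == 0:      # first 16th of the beat: strong accent
            v += 20
        elif i % 4 == 2:    # third 16th: slight accent
            v += 8
        v += random.randint(-5, 5)   # the "humanize" overlay
        velocities.append(max(1, min(127, v)))
    return velocities

print(humanize_hats())
```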
Go Beyond Traditional Echo
Compared to audio echo, MIDI echo can be far more flexible. Fig. 2 shows, among other MIDI plug-ins, Cakewalk’s MIDI Echo plug-in.
Figure 2: Clockwise from upper left, Logic Pro X Randomizer and Chord Trigger, Cakewalk Data Filter, Echo, and Velocity processor.
Much depends on a plug-in’s individual capabilities, but many allow variations on the echoes—change pitch as notes echo, do transposition, add swing (try that with your audio plug-in equivalent), and more. But if those options aren’t present, there’s still DIY potential because you can render the track with a MIDI plug-in, then tweak the echoes manually. MIDI echo makes it particularly easy to generate staccato, “dugga-dugga-dugga” synth parts that provide rhythmic underpinnings to many dance tracks; the only downside is that long, languid echoes with lots of repeats eat up synth voices.
Experiment with Adding Human “Feel”
A Shift MIDI plug-in shifts note start times forward or backward. This benefits greatly from MIDI plug-ins’ real-time operation because you can listen to the changes in “feel” as you move, for example, a snare hit ahead or behind the beat somewhat.
Remove Glitches
“De-glitcher” plug-ins remove duplicate events that hit on the same beat, filter out notes below a specific duration or velocity, “de-flam” notes to move the start times of multiple out-of-sync notes to the average start time, or other options that help clean up pollution from MIDI data streams.
Constrain Notes to a Scale, and Nuke Wrong Notes
Plug-ins that can snap to scale pull errant notes into a defined scale—just bash away at a keyboard (or have a cat walk across it), and there won’t be any “wrong” notes. Placing this after a randomizer can be very interesting, as it offers the benefits of randomness yet notes are always constrained to particular scales.
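Under the hood, snap-to-scale is a small lookup. A minimal Python sketch, assuming C major as the target scale:

```python
# Pull any MIDI note number to the nearest pitch in a chosen scale.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}   # pitch classes of C major

def snap_to_scale(note, scale=C_MAJOR):
    for offset in range(12):                     # search outward
        for candidate in (note - offset, note + offset):
            if 0 <= candidate <= 127 and candidate % 12 in scale:
                return candidate                 # ties resolve downward
    return note

print(snap_to_scale(61))   # C#4 -> 60 (C4)
print(snap_to_scale(66))   # F#4 -> 65 (F4)
```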
Analyze Chords
Put this plug-in on a track, and it will read out the kind of chord made by the track’s notes. With ambiguous chords, the analyzer may display all voicings it recognizes. Aside from figuring out exactly what you played when you had a spurt of inspiration, for those using MIDI backing tracks an analyzer simplifies figuring out chord progressions.
Add an LFO to Just About Anything
Being able to change MIDI parameters rhythmically can add considerable interest and animation to synth modules and MIDI-controllable signal processors. Although some DAWs let you draw in periodic waveforms (and you can always take the time to create a library of MIDI continuous controller signals suitable for pasting into programs), a Continuous Controller generator provides these same functions in a much more convenient package.
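Conceptually, a continuous controller generator just traces a waveform as a stream of CC values. A sketch in Python with mido, using CC#74 (often filter cutoff) with example rate and resolution:

```python
# Trace a sine-wave LFO as Control Change messages. CC number, rate, and
# resolution are example values.
import math
import mido

def lfo_cc(rate_hz=2.0, steps_per_cycle=32, cycles=1, control=74):
    """Yield (time_offset_seconds, message) pairs along a sine LFO."""
    step_time = 1.0 / (rate_hz * steps_per_cycle)
    for i in range(steps_per_cycle * cycles):
        phase = 2 * math.pi * i / steps_per_cycle
        value = round(63.5 + 63.5 * math.sin(phase))   # map -1..1 to 0..127
        yield i * step_time, mido.Message('control_change',
                                          control=control, value=value)

for t, msg in list(lfo_cc())[:4]:
    print(f'{t:.3f}s  {msg}')
```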
The above functions are fairly common—but scratch beneath the surface, and you’ll find all kinds of interesting MIDI plug-ins, either bundled with hosts or available from third parties. Midiplugins.com lists MIDI plug-ins from various companies. Some of the links have disappeared into internet oblivion and some belong to zombie sites, but there are still plenty of potentially useful MIDI effects. More resources are available at midi-plugins.de (the most current of the sites) and tencrazy.com. Happy data diving!
There’s more to life than audio echo – like MIDI echo
Although the concept of MIDI echo has been around for years, early virtual instruments often didn’t have enough voices to play back new echoes without stealing voices from previous echoes. With today’s powerful computers and instruments, this is less of a problem – so let’s re-visit MIDI echo.
Copy and Drag MIDI Tracks
It’s simple to create MIDI echo: Copy your MIDI track, and then drag the notes for the desired amount of delay compared to the original track. Repeat for as many echoes as you want, then bounce all the parts together (or not, if you think you’ll want to edit the parts further). In the screen shot above, the notes colored red are the original MIDI part, the blue notes are delayed by an eighth note, and the green notes are delayed by a dotted-eighth note. The associated note velocities have also been colored to show the velocity changes for the different echoes.
Change Note Velocities for More Variety
But wait—there’s more! You can not only create polyrhythmic echoes, but also change velocities on the different notes. Although the later echoes can have different dynamics, there’s no law that says all the changes must be uniform. Nor do you have to follow the standard “rules” of echo—consider dragging very low-velocity notes ahead of the beat to give pre-echo.
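If you would rather script the copy-delay-scale recipe than drag notes, here is a minimal Python sketch; delay, repeat count, and decay are example values, and times are in beats:

```python
# DIY MIDI echo: copy each note, delay it, and scale down the velocity on
# every repeat. notes are (start_beat, note_number, velocity) tuples.
def midi_echo(notes, delay=0.5, repeats=3, decay=0.7):
    events = list(notes)
    for r in range(1, repeats + 1):
        for start, note, vel in notes:
            v = max(1, round(vel * decay ** r))
            events.append((start + r * delay, note, v))
    return sorted(events)

part = [(0.0, 60, 100), (1.0, 64, 100)]
for event in midi_echo(part):
    print(event)
```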
MIDI Plug-Ins for Echo
Some DAWs that support MIDI plug-ins offer MIDI echo, which sure is convenient. Even if yours doesn’t, though, you can always create echoes manually, as described above. The bottom line is that there are many, many possibilities with MIDI echo…check them out.
Just like MIDI itself, the Korg SQ-64 hardware sequencer focuses on connectivity and control
The SQ-64 is unique in its ability to drive MIDI and modular synths, letting you create music with all your synth gear without the need for a computer, tablet, or cellphone. It features four hardware-based sequencer tracks, each with up to 64-step sequences.
The first three tracks support up to 8-note polyphony, with Mod, Pitch, and Gate outputs for each track. The fourth track is designed to be a monophonic 16-part sequencer, driving eight separate Gate outputs along with eight different MIDI notes — perfect for driving a drum machine or drum synthesis modules. So in total, you can send three polyphonic sequences to three different devices via MIDI or CV/Gate/Mod, plus a monophonic sequence with up to eight different MIDI notes to a MIDI device, plus a monophonic sequence with up to eight different parts sent out via Gate outputs. That’s a lot of creative potential for a compact hardware sequencer!
by Sweetwater
Blending CV/Gate and MIDI control in one portable box
It’s the unique combination of CV control, MIDI & audio sync, and polyphonic multitrack sequencing that makes Korg’s SQ-64 special. Check out Korg’s James Sajeva as he demos the SQ-64 with a rack of modular synths.
More Unique Step Sequencing features
The SQ-64 step sequencer has some unique features that are really only available with a step sequencer. You can set the steps to play backwards (Reverse); play from the beginning to the end and then turn around (Bounce); move stochastically (randomly pick between one step forward, skip one forward, one step backward, or repeat the step); or play at random (randomly pick from all the available steps in the track). Combine that with polyrhythms (each track can have a different length) and independently changeable time divisions for each track (1/32, 1/16, 1/8, 1/4, plus triplets), and there is an endless amount of creative fun available, as sketched below.
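For the curious, here is a rough Python model of what those playback modes mean as step-advance logic; it is an illustration of the concepts, not Korg’s firmware:

```python
# Illustrative step-advance logic for Forward, Reverse, Bounce,
# Stochastic, and Random playback modes on a track of a given length.
import random

def next_step(current, length, mode, direction=1):
    if mode == 'forward':
        return (current + 1) % length, direction
    if mode == 'reverse':
        return (current - 1) % length, direction
    if mode == 'bounce':                         # turn around at the ends
        if not 0 <= current + direction < length:
            direction = -direction
        return current + direction, direction
    if mode == 'stochastic':                     # small random walk
        move = random.choice([1, 2, -1, 0])      # forward, skip, back, repeat
        return (current + move) % length, direction
    if mode == 'random':                         # any step in the track
        return random.randrange(length), direction
    raise ValueError(mode)

step, direction = 0, 1
for _ in range(20):
    step, direction = next_step(step, 16, 'bounce', direction)
    print(step, end=' ')
```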
DAW software, like Ableton Live, Logic, Pro Tools, Studio One, etc. isn’t just about audio. Virtual instruments that are driven by MIDI data produce sounds in real time, in sync with the rest of your tracks. It’s as if you had a keyboard player in your studio who played along with your tracks, and could play the same part, over and over again, without ever making a mistake or getting tired.
MIDI-compatible controllers, like keyboards, drum pads, mixers, control surfaces, and the like, generate data that represents performance gestures (fig. 1). These include playing notes, moving controls, changing level, adding vibrato, and the like. The computer then uses this data to control virtual instruments and effects.
Figure 1: Native Instruments’ Komplete keyboards generate MIDI data, but can also edit the parameters of virtual instruments.
Virtual Instrument Basics
Virtual instrument “tracks” are not traditional digital audio tracks, but instrument plug-ins triggered by MIDI data. The instruments exist in software. You can play a virtual instrument in real time, record what you play as data, edit it if desired, and then convert the virtual instrument’s sound to a standard audio track—or let it continue to play back in real time.
Virtual instruments are based on computer algorithms that model or reproduce particular sounds, from ancient analog synthesizers, to sounds that never existed before. The instrument outputs appear in your DAW’s mixer, as if they were audio tracks.
Why MIDI Tracks Are More Editable than Audio Tracks
Virtual instruments are driven by MIDI data, so editing the data driving an instrument changes the part. This editing can be as simple as transposing to a different key, or as complex as changing an arrangement by cutting, pasting, and processing MIDI data in various ways (fig. 2).
Figure 2: MIDI data in Ableton Live. The rectangles indicate notes, while the lines along the bottom show the dynamics for the various notes. All of this data is completely editable.
Because MIDI data can be modified so extensively after being recorded, tracks triggered by MIDI data are far more flexible than audio tracks. For example, if you record a standard electric bass part and decide you should have played the part with a synthesizer bass instead, or used the neck pickup instead of the bridge pickup, you can’t make those changes. But the same MIDI data that drives a virtual bass can just as easily drive a synthesizer, and the virtual bass instrument itself will likely offer the sounds of different pickups.
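As a tiny illustration of that flexibility: transposing a recorded MIDI part is just arithmetic on note numbers. This sketch uses the Python mido library; the file names are placeholders.

```python
# Transpose every note in a Standard MIDI File by a number of semitones.
import mido

def transpose_file(path_in, path_out, semitones):
    mid = mido.MidiFile(path_in)
    for track in mid.tracks:
        for msg in track:
            if msg.type in ('note_on', 'note_off'):
                msg.note = max(0, min(127, msg.note + semitones))
    mid.save(path_out)

# transpose_file('bass_take.mid', 'bass_take_up4.mid', 4)  # up a major third
```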
How DAWs Handle Virtual Instruments
Programs handle virtual instrument plug-ins in two main ways:
The instrument inserts in one track, and a separate MIDI track sends its data to the instrument track.
More commonly, a single track incorporates both the instrument and its MIDI data. The track itself consists of MIDI data. The track output sends audio from the virtual instrument into a mixer channel.
Compared to audio tracks, there are three major differences when mixing with virtual instruments:
The virtual instrument’s audio is typically not recorded as a track, at least initially. Instead, it’s generated by the computer, in real time.
The MIDI data in the track tells the instrument what notes to play, the dynamics, additional articulations, and any other aspects of a musical performance.
In a mixer, a virtual instrument track acts like a regular audio track, because it’s generating audio. You can insert effects in a virtual instrument’s channel, use sends, do panning, automate levels, and so on.
However, after doing all needed editing, it’s a good idea to render (transform) the MIDI part into a standard audio track. This lightens the load on your CPU (virtual instruments often consume a lot of CPU power), and “future-proofs” the part by preserving it as audio. Rendering is also helpful in case the instrument you used to create the part becomes incompatible with newer operating systems or program versions. (With most programs, you can retain the original, non-rendered version if you need to edit it later.)
The Most Important MIDI Data for Virtual Instruments
The two most important parts of the MIDI “language” for mixing with virtual instruments are note data and controller data.
Note data specifies a note’s pitch and dynamics.
Controller data creates modulation signals that vary parameter values. These variations can be periodic, like vibrato that modulates pitch, or arbitrary variations generated by moving a control, like a physical knob or footpedal.
Just as you can vary a channel’s fader to change the channel level, MIDI data can create changes—automated or human-controlled—in signal processors and virtual instruments. These changes add interest to a mix by introducing variations.
Instruments with Multiple Outputs
Many virtual instruments offer multiple outputs, especially if they’re multitimbral (i.e., they can play back different instruments, which receive their data over different MIDI channels). For example, if you’ve loaded bass, piano, and ukulele sounds, each one can have its own output, on its own mixer channel (which will likely be stereo).
However, multitimbral instruments generally have internal mixers as well, where you can set the various instruments’ levels and panning (fig. 3). The mix of the internal sounds appears as a stereo channel in your DAW’s mixer. The instrument will likely incorporate effects, too.
Figure 3: IK Multimedia’s SampleTank can host up to 16 instruments (8 are shown), mix them down to a stereo output, and add effects.
Using a stereo, mixed instrument output has pros and cons:
There’s less clutter in your software mixer, because each instrument sound doesn’t need its own mixer channel.
If you load the instrument preset into a different DAW, the mix settings travel with it.
To adjust levels, the instrument’s user interface has to be open. This takes up screen space.
If the instrument doesn’t include the effects plug-ins needed to create a particular sound, then use the instrument’s individual outputs, and insert effects in your DAW’s mixer channels. (For example, using separate outputs for drum instruments allows adding individual effects to each drum sound.)
Are Virtual Instruments as Good as Physical Instruments?
This is a question that keeps cropping up, and the answer is…it depends. A virtual piano won’t have the resonating wood of a physical piano, but paradoxically, it might sound better in a mix because it was recorded with tremendous care, using the best possible microphones. Also, some virtual instruments would be difficult, or even impossible, to create as physical instruments.
One possible complaint about virtual instruments is that their controls don’t work as smoothly as, for example, an analog synthesizer’s. This is because the control movement has to be converted into digital data, which is divided into steps. However, the MIDI 2.0 specification increases control resolution dramatically; the steps become so minuscule that rotating a control feels just like rotating a control on an analog synthesizer.
MIDI 2.0 also makes it easier to integrate physical instruments with DAWs, so they can be treated more like virtual instruments, and offer some of the same advantages. So the bottom line is that the line between physical and virtual instruments continues to blur—and both are essential elements in today’s recordings.
This workshop is part of a series of monthly free live events about MIDI organised by the Music Hackspace
Date & Time: Tuesday 27th April 6pm UK / 7pm Berlin / 1pm NYC / 10am LA
Level: Beginner
Ableton Live offers a vast playground of musical opportunities to create musical compositions and productions. Live’s native MIDI FX provides a range of tools to allow the composer and producer to create ideas in a myriad of ways. Max For Live complements these tools and expands musical possibilities. In this workshop you will creatively explore and deploy a range of MIDI FX in a musical setting. This workshop aims to provide you with suitable skills to utilise the creative possibilities of MIDI FX in the Ableton Live environment.
Session Learning Outcomes
By the end of this session a successful student will be able to:
Identify and deploy MIDI FX
Explore native and M4L MIDI FX in Live
Render the output of MIDI FX into MIDI clips for further manipulation
Apply MIDI FX to create novel musical and sonic elements
Session Study Topics
Using MIDI FX
Native and M4L MIDI FX
Rendering MIDI FX outputs
Creatively using MIDI FX
Requirements
A computer and internet connection
A web cam and mic
A Zoom account
Access to a copy of Live Suite with M4L (i.e. trial or full license)
About the workshop leader
Mel is a London based music producer, vocalist and educator.
She spends most of her time teaching people how to make music with Ableton Live and Push. When she’s not doing any of the above, she makes educational content and helps music teachers and schools integrate technology into their classrooms. She is particularly interested in training and supporting female and non-binary people to succeed in the music world.
MIDI Polyphonic Expression (MPE) offers a vast playground of musical opportunities to create musical compositions and productions. Live 11 supports a range of MPE tools to allow the composer and producer to create ideas in a myriad of ways. In this workshop you will creatively explore and deploy a range of MPE techniques in a musical setting. This workshop aims to provide you with suitable skills to utilise the creative possibilities of MPE in the Ableton Live environment.
Session Learning Outcomes
By the end of this session a successful student will be able to:
Identify the role and function of MPE
Explore MPE compatible devices in Live
Utilize MPE controllers within Live 11
Apply MPE to create novel musical and sonic elements
Session Study Topics
Using MPE
MPE devices in Live
MPE controllers
Creatively using MPE
Requirements
A computer and internet connection
A web cam and mic
A Zoom account
Access to a copy of Live 11 (i.e. trial or full license)
About the workshop leader
Mel is a London based music producer, vocalist and educator.
She spends most of her time teaching people how to make music with Ableton Live and Push. When she’s not doing any of the above, she makes educational content and helps music teachers and schools integrate technology into their classrooms. She is particularly interested in training and supporting female and non-binary people to succeed in the music world.
From the first introduction of MIDI at the 1983 NAMM Show to the adoption of MIDI 2.0 at NAMM 2020, NAMM (the National Association of Music Merchants) has always been a part of, and a partner in, our shared journey.
At Winter NAMM we always hold a joint meeting between The MIDI Association and AMEI (the Association of Musical Electronics Industry, which oversees the MIDI spec in Japan). We also hold our Annual General Meeting, where the MIDI Association corporate members meet, adopt new specifications, and discuss plans for the next year.
This year is different because NAMM is holding an all virtual event called Believe In Music. The event opens on Monday, January 11, 2021 but most of the events take place the week of January 18.
We decided to try to keep things as normal as possible, so here is the schedule of MIDI Association events for Believe in Music week.
MPE is a relatively new specification within MIDI, the universal protocol for electronic music. MPE allows digital instruments to behave more like acoustic instruments in terms of spontaneous, polyphonic sound control, so players can modulate parameters like timbre, pitch, and amplitude, all at the same time.
Join Audio Modeling, Keith McMillen Instruments, moForte, ROLI, and other MPE companies in an exploration of MIDI Polyphonic Expression.
Profile Configuration
MIDI gear can now have Profiles that can dynamically configure a device for a particular use case. The MIDI Association has adopted our first Profile, Default Controller Mapping, and is considering Profiles for Orchestral Articulations, Drawbar Organ, Guitar, Piano, DAW Control, Effects, and more.
Property Exchange
While Profiles set up an entire device, Property Exchange messages provide specific, detailed information sharing. These messages can discover, retrieve, and set many properties like preset names, individual parameter settings, and unique functionalities. For example, your recording software could display everything you need to know about a synthesizer onscreen, effectively bringing hardware synths up to the same level of recallability as their software counterparts.
Property Exchange will bring the same level of recallability that soft synths have to hardware MIDI products
When MIDI first started there was only one transport: the 5-pin DIN cable. But soon there were many different ways to send MIDI messages, over USB, RTP, FireWire, and many more cables and transports. None has been more transformative than BLE-MIDI, because it allows you to send MIDI wirelessly over Bluetooth, freeing products and performers from the restriction of being tethered to a cable. Join Aodyo, CME, Novalia, Roland, Quicco, Yamaha, and other BLE companies in a discussion of the benefits of BLE-MIDI.
DJ Qbert’s BLE MIDI Interactive Album Cover by Novalia
MIDI 2.0 is bi-directional and changes MIDI from a monologue to a dialog. For example, with the new MIDI-CI (Capability Inquiry) messages, MIDI 2.0 devices can talk to each other, and auto-configure themselves to work together.
Higher Resolution, More Controllers and Better Timing
To deliver an unprecedented level of musical and artistic expressiveness, MIDI 2.0 re-imagines the role of performance controllers. Controllers are now easier to use, and there are more of them: over 32,000 controllers, including controls for individual notes. Enhanced, 32-bit resolution gives controls a smooth, continuous, “analog” feel. New Note-On options were added for articulation control and precise note pitch.
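To get a feel for how 7-bit MIDI 1.0 values map into that 32-bit range, here is a sketch of upscaling by bit repetition, one simple published approach; the MIDI 2.0 translation specifications define the normative rules, so treat this purely as an illustration:

```python
# Spread a 7-bit (0-127) value across the full 32-bit range by placing it
# in the top bits and repeating it downward, so 0 maps to 0x00000000 and
# 127 maps to 0xFFFFFFFF. Illustration only; see the MIDI 2.0 specs.
def upscale_7_to_32(value7):
    assert 0 <= value7 <= 127
    result = value7 << 25          # the 7 bits become the most significant
    shift = 25
    while shift > 0:               # fill the remaining bits by repetition
        shift -= 7
        if shift >= 0:
            result |= value7 << shift
        else:
            result |= value7 >> -shift
    return result

print(hex(upscale_7_to_32(0)))     # 0x0
print(hex(upscale_7_to_32(127)))   # 0xffffffff
```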
MIDI 2.0 has MIDI 1.0 inside it, making translation back and forth easy
2020 was a pretty tough year and everyone was affected by the events that shaped the world.
But 2020 had its positive moments too. So we’d like to focus on the good things that happened during 2020 in the MIDI Association.
At the January 2020 NAMM show, the MIDI Association and AMEI officially adopted MIDI 2.0.
On February 20, 2020 (02-20-2020) we published the first five Core MIDI 2.0 specifications to the world.
In April, the MIDI.org website was selected by the United States Library of Congress for inclusion in the historic collection of Internet materials related to the Professional Organizations for Performing Arts Web Archive.
During May Is MIDI Month, we raised $18,000 and committed to spend that money on people affected by the global pandemic.
In June, at WWDC, Apple announced Big Sur (MacOS 11.0) which includes MIDI-CI support. The OS was released in November. Also in June, the USB-IF published the USB MIDI 2.0 specification.
In September, we did a webinar at the International Game Developers Association on MIDI 2.0 for our Interactive Audio Special Interest Group.
In October, we published a new Specifications area of our website and we have now published 15 MIDI 2.0 specifications.
In December, we announced our NAMM Believe In Music Week participation and the first annual MIDI Innovation Awards.
So in the midst of one of the most challenging years in history, we made huge progress in moving MIDI (and the MIDI Association) forward.
To help celebrate, we have arranged a discount on a great book about MIDI and free attendance at NAMM’s Believe In Music week for all our MIDI Association members.
Welcome to 2021, it is going to be a very significant year in the history of MIDI.
We make Live, Push and Link — unique software and hardware for music creation and performance. With these products, our community of users creates amazing things. Ableton was founded in 1999 and released the first version of Live in 2001. Our products are used by a community of dedicated musicians, sound designers, and artists from across the world.
Making music isn’t easy. It takes time, effort, and learning. But when you’re in the flow, it’s incredibly rewarding. We feel the same way about making Ableton products. The driving force behind Ableton is our passion for what we make, and the people we make it for.
Songmaker Kit
The ROLI Songmaker Kit comprises some of the most innovative and portable music-making devices available. It’s centered around the Seaboard Block, a 24-note controller featuring ROLI’s acclaimed keywave playing surface. It’s joined by the Lightpad Block M touch controller and the Loop Block control module, for comprehensive control over the included Equator and NOISE software. Complete with a protective case, the ROLI Songmaker Kit is a powerful portable music creation system.
The Songmaker Kit also includes Ableton Live Lite, and Ableton is a May MIDI Month platinum sponsor.
Brothers Marco and Jack Parisi recreate a Michael Jackson classic hit
Electronic duo PARISI are true virtuosic players of ROLI instruments, whose performances have amazed and astounded audiences all over the world — and their latest rendition of Michael Jackson’s iconic pop hit “Billie Jean” is no exception.
Sometimes you just need to relax and do something cool.
So on Labor Day weekend 2020 we shared this video of MEZERG enjoying some cool watermelon, some bright sun, and a dip in the pool.
Oh yeah, and MIDI of course!
Want to try it yourself? Playtronica makes it possible.
Playtron is a new type of music device.
Connect Playtron to fruit and play electronic music using online synthesizers, or use it as a MIDI controller with any music software and conductive objects.
Buy Playtron or TouchMe, two gadgets that let you play music on any object. We are an international studio dedicated to creating meaningful interactive audio experiences, in collaboration with brands, marketers, museums, galleries, and artists.
The best new way to learn piano. Learning with flowkey is easy and fun. Practice notes and chords interactively and receive instant feedback.
The idea behind flowkey is simple: “learn piano with songs you love.” And the flowkey app makes it easy to learn your favorite songs, whether your level is that of a Beginner, Intermediate, Advanced or Pro piano player!
Discover fascinating piano arrangements tailored to your level. Get started today and play your first song within minutes.
Click on the links below to see the Yamaha keyboards that qualify in your area.
New presets from Jordan Rudess and more for Mac/PC and iOS.
Jordan Rudess recently took the stage with Deep Purple for a festival performance in Mexico City, using the Hammond B-3X as the band’s sole organ instrument. With great success, the Hammond B-3X fit seamlessly into the performance, nailing every organ sound the band has built their sound upon. Jordan and IK product manager Erik Norlander created 24 custom presets for the show, with the idea of also releasing them to all Hammond B-3X users. The presets are automatically installed with the 1.3 update.
Mac/PC version:
24 new Jordan Rudess Deep Purple presets
Compatibility with iPad preset sharing
Controllers are now received only on the assigned channels
Pitch bend range is now stored globally
iPad version:
24 new Jordan Rudess Deep Purple presets
New share function for importing and exporting presets with the desktop version and other iPads
New restore factory presets function
Controllers are now received only on the assigned channels
Pitch bend range is now stored globally
Update your software now to gain all of these added features!
Audio Modeling has been coming out with more and more physically modeled instruments that add incredible realism and expressiveness. Recently they released the Solo Brass Bundle.
You can buy either individual instruments or save money by buying the entire bundle.
Want to connect modular hardware to Ableton Live? There are a number of ways to go about this depending on what software and hardware you have. In this article, we break down the different methods and explain the gear you might need.
Live is fast, fluid and flexible software for music creation and performance. It comes with effects, instruments, sounds and all kinds of creative features—everything you need to make any kind of music.
Create in a traditional linear arrangement, or improvise without the constraints of a timeline in Live’s Session View. Move freely between musical elements and play with ideas, without stopping the music and without breaking your flow.
Ableton and Max for Live
Max For Live puts the vast creative potential of the Max development environment directly inside of Live. It powers a range of instruments and devices in Live Suite. And for those who want to go further, it lets you customize devices, create your own from scratch, and explore another world of devices produced by the Max For Live community.
Ableton makes Push and Live, hardware and software for music production, creation and performance. Ableton’s products are made to inspire creative music-making.
We have actively participated in creating the MIDI 2.0 specifications in the MIDI Manufacturers Association for many years. This year, some specifications will be finalized, and the Bome products will learn new MIDI 2.0 features along that path. The main focus will be on bridging MIDI 1.0 gear with the MIDI 2.0 world: proxying and translation. Existing BomeBox owners will also benefit from these new features by way of free firmware upgrades.
by Florian Bome
The BomeBox is a versatile hardware MIDI router, processor, and translator in a small, robust case. Connect your MIDI gear via MIDI-DIN, USB, Ethernet, and WiFi to the BomeBox and benefit instantly from all its functions. It’s a solution for your MIDI connection needs on stage or in the studio.
In conjunction with the desktop editor software Bome MIDI Translator Pro (sold separately), you can create powerful MIDI mappings, including layerings, MIDI memory, and MIDI logic. A computer is only needed for creating the mapping. Once it is loaded into the BomeBox, a computer is not necessary for operation.
BomeBox Overview
BomeBox Features
Configuration
The BomeBox is configured via a web browser. Just enable the integrated WiFi Hot Spot, connect your cell phone, tablet, or computer to it, and open a web browser to access the easy-to-use web configuration.
MIDI DIN
Connect your MIDI gear to the two standard MIDI DIN input and output ports. If you need more MIDI-DIN ports, use the MIDI Host port!
USB Host
The USB Host port allows you to connect any (class compliant) USB-MIDI device to the BomeBox, and use the advanced MIDI router and processing.
USB Hubs
Using a USB hub, you can connect even more USB-MIDI devices to a BomeBox. The MIDI Router allows fine grained routing control for every connected MIDI device individually.
MIDI Router
The integrated MIDI Router gives you full control over which MIDI device talks to which other MIDI device connected to the BomeBox. And if you need more fine grained filtering, or routing by MIDI channel, note number, etc., see Processing below.
Network MIDI Support
The BomeBox has two Ethernet ports. You can use Ethernet to directly connect BomeBox to BomeBox or to a computer. Using the Bome Network tool (see below), all BomeBoxes are auto-discovered. Once set up (“paired”), Network MIDI connections are persistent across reboots and BomeBox power cycles.
Wireless MIDI
The BomeBox’s integrated WiFi HotSpot can also be used for wireless MIDI connections to computers and/or to other BomeBoxes. You can also configure the BomeBox to be a WiFi client for integration into existing WiFi networks.
Processing
The powerful MIDI processing of Bome MIDI Translator Pro is available in the BomeBox. Hundreds of thousands of processing entries can be stored on the BomeBox.
Incoming Actions:
MIDI messages
Keystrokes (on QWERTY keyboard or number pad)
Data on Serial Port
Timed events
Enable/disable translation preset
Scripting (“Rules”):
A sequence of rules can be defined to be processed if the incoming action matches:
assignments of variables, e.g. pp = 20
simple expressions, e.g. pp = og + 128
labels and goto, e.g. goto “2nd Options”
conditional execution, e.g. IF pp < 20 THEN do not execute Outgoing Action
Outgoing Actions:
Send MIDI messages
Send bytes or text to Serial Ports
Create/start/stop timer
Enable/disable translation preset
Open another translation project
Keystroke (QWERTY) Input Support
Connect a (wireless) computer keyboard or a number pad to the BomeBox, then use the processing capabilities to convert to MIDI or trigger other actions! Really? Yes! and it’s useful… sometimes!
RS-232 Serial Port Support
The BomeBox also supports RS-232 adapters to be plugged into the USB host port. Now all processing actions are available in conjunction with serial ports, too: convert serial data to MIDI and vice versa. Route Serial port data via Ethernet. Or integrate older mixing consoles which only talk RS-232.
Allen & Heath Digital Mixer Support
Last, but not least, the BomeBox has built-in support for Allen & Heath mixers connected via Ethernet. They’re auto-discovered, and once you’ve paired them, all the MIDI routing and processing is available to the connected A&H mixer, too!
Bome Network
The standard edition of the Bome Network tool allows connecting your computer to one or more BomeBoxes via Ethernet and WiFi. Any MIDI application can send MIDI to the BomeBox and receive from it. On the BomeBox, you can configure which MIDI stream is sent to a particular connected computer.
BomeBoxes are auto-discovered, and once you’ve established a connection (“paired”), it is persistent across reboots and BomeBox power cycles.
If you like to set up network MIDI connections from computer to computer, use the Add-On Bome Network Pro.
Bome Network is available for Windows and for macOS.
Take your MIDI gear to the next level! Bome Software creates software and hardware for custom interaction with your MIDI devices and the computer. Used by live sound engineers, controllerists, DJ’s, theaters and opera houses, lighting engineers, beat boxers, performance artists, music and broadcasting studios, and many others.
We have actively participated in creating the MIDI 2.0 specifications in the MIDI Manufacturers Association for many years. This year, some specifications will be finalized, and the Bome products will learn new MIDI 2.0 features along that path. The main focus will be on bridging MIDI 1.0 gear with the MIDI 2.0 world: proxying and translation. Existing BomeBox owners will also benefit from these new features by way of free firmware upgrades.
by Florian Bome
The BomeBox is a versatile hardware MIDI router, processor, and translator in a small, robust case. Connect your MIDI gear via MIDI-DIN, USB, Ethernet, and WiFi to the BomeBox and benefit instantly from all its functions. It’s a solution for your MIDI connection needs on stage or in the studio.
In conjunction with the desktop editor software Bome MIDI Translator Pro (sold separately), you can create powerful MIDI mappings, including layerings, MIDI memory, and MIDI logic. A computer is only needed for creating the mapping. Once it is loaded into the BomeBox, a computer is not necessary for operation.
BomeBox Overview
BomeBox Features
Configuration
The BomeBox is configured via a web browser. Just enable the integrated WiFi Hot Spot, connect your cell phone, tablet, or computer to it, and open a web browser to access the easy-to-use web configuration.
MIDI DIN
Connect your MIDI gear to the two standard MIDI DIN input and output ports. If you need more MIDI-DIN ports, use the MIDI Host port!
USB Host
The USB Host port allows you to connect any (class compliant) USB-MIDI device to the BomeBox, and use the advanced MIDI router and processing.
USB Hubs
Using a USB hub, you can connect even more USB-MIDI devices to a BomeBox. The MIDI Router allows fine grained routing control for every connected MIDI device individually.
MIDI Router
The integrated MIDI Router gives you full control over which MIDI device talks to which other MIDI device connected to the BomeBox. And if you need more fine grained filtering, or routing by MIDI channel, note number, etc., see Processing below.
Network MIDI Support
The BomeBox has two Ethernet ports. You can use Ethernet to directly connect BomeBox to BomeBox or to a computer. Using the Bome Network tool (see below), all BomeBoxes are auto-discovered. Once set up (“paired”), Network MIDI connections are persistent across reboots and BomeBox power cycles.
Wireless MIDI
The BomeBox’s integrated WiFi hotspot can also be used for wireless MIDI connections to computers and/or other BomeBoxes. You can also configure the BomeBox as a WiFi client for integration into existing WiFi networks.
Processing
The powerful MIDI processing of Bome MIDI Translator Pro is available in the BomeBox. Hundreds of thousands of processing entries can be stored on the BomeBox.
Incoming Actions:
MIDI messages
Keystrokes (on QWERTY keyboard or number pad)
Data on Serial Port
Timed events
Enable/disable translation preset
Scripting (“Rules”):
A sequence of rules can be defined to be processed if the incoming action matches (a minimal code sketch of equivalent logic follows these lists):
assignments of variables, e.g. pp = 20
simple expressions, e.g. pp = og + 128
labels and goto, e.g. goto “2nd Options”
conditional execution, e.g. IF pp < 20 THEN do not execute Outgoing Action
Outgoing Actions:
Send MIDI messages
Send bytes or text to Serial Ports
Create/start/stop timer
Enable/disable translation preset
Open another translation project
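To make that flow concrete, here is a minimal sketch of the kind of translation such a rule set performs. This is illustrative Python, not Bome’s actual Rules syntax; it assumes the third-party mido library, and the port names are hypothetical.

```python
# A sketch of the kind of translation a BomeBox rule performs -- illustrative
# Python, NOT Bome's Rules language. Assumes the third-party "mido" library.
import mido

IN_PORT = "My Controller"    # hypothetical input port name
OUT_PORT = "My Synth"        # hypothetical output port name

with mido.open_input(IN_PORT) as inp, mido.open_output(OUT_PORT) as out:
    for msg in inp:
        # Incoming action: match a specific MIDI message (mod wheel, CC 1)
        if msg.type == "control_change" and msg.control == 1:
            pp = msg.value                # assignment of a variable
            pp = pp + 10                  # simple expression
            if pp < 20:                   # conditional execution:
                continue                  # do not execute the Outgoing Action
            # Outgoing action: send a different CC on MIDI channel 2
            out.send(mido.Message("control_change", channel=1,
                                  control=74, value=min(pp, 127)))
        else:
            out.send(msg)                 # route everything else through
```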
Keystroke (QWERTY) Input Support
Connect a (wireless) computer keyboard or a number pad to the BomeBox, then use the processing capabilities to convert keystrokes to MIDI or trigger other actions! Really? Yes! And it’s useful… sometimes!
RS-232 Serial Port Support
The BomeBox also supports RS-232 serial adapters plugged into the USB host port. All processing actions are available for serial ports, too: convert serial data to MIDI and vice versa, route serial-port data via Ethernet, or integrate older mixing consoles that only speak RS-232.
Allen & Heath Digital Mixer Support
Last, but not least, the BomeBox has built-in support for Allen & Heath mixers connected via Ethernet. They’re auto-discovered, and once you’ve paired them, all the MIDI routing and processing is available to the connected A&H mixer, too!
Bome Network
The standard edition of the Bome Network tool allows connecting your computer to one or more BomeBoxes via Ethernet and WiFi. Any MIDI application can send MIDI to the BomeBox and receive from it. On the BomeBox, you can configure which MIDI stream is sent to a particular connected computer.
BomeBoxes are auto-discovered, and once you’ve established a connection (“paired”), it is persistent across reboots and BomeBox power cycles.
If you’d like to set up network MIDI connections from computer to computer, use the Bome Network Pro add-on.
Bome Network is available for Windows and for macOS.
Take your MIDI gear to the next level! Bome Software creates software and hardware for custom interaction with your MIDI devices and the computer. Used by live sound engineers, controllerists, DJs, theaters and opera houses, lighting engineers, beat boxers, performance artists, music and broadcasting studios, and many others.
Wondering how to connect and control your hardware and software instruments in one place? Want to remotely control your Yamaha synthesizers and quickly recall presets on stage? How about attaching a lead sheet or music score with your own notes to a set of sounds?
Camelot Pro and Yamaha have teamed up with special features for Yamaha Synth owners.
REGISTER AND GET CAMELOT PRO FOR MAC OS OR WINDOWS
Download your Camelot Pro copy now with a special offer for Yamaha Synth owners: try the full version FREE for three months with an option to purchase for 40% off.
The promo is valid from October 1, 2019 to September 30, 2020.
Upgrade your live performance experience to the next level:
Build your live set list with ease
Manage your Yamaha instruments using smart maps (no programming skills required!)
Combine, layer and split software instruments with your Yamaha synths
Get rid of standard connection limits with Camelot Advanced MIDI routing
Attach music scores or chords to any scene
The really slick thing about combining Yamaha synths with Camelot Pro is that it lets you very easily integrate your hardware synths and VST/AU plugins for live performance. The Yamaha synths connect to your computer via USB and integrate digital audio and MIDI. So just connect your computer to your Yamaha synth, and then your Yamaha synth to your sound system. Camelot allows you to integrate your hardware and software in complex splits and layers, and everything comes out the analog outputs of your Yamaha synth.
If you have Cubase/Nuendo, take advantage of the special 50% off promotion that Steinberg is running until June 30 on VST Connect Pro.
If you are a musician who works with producers who use Cubase/Nuendo, you can download VST Connect Performer for free and do studio sessions from the comfort of your home.
Music with no boundaries
VST Connect Pro lets you expand your studio from its physical location to cover the whole world. It allows any musician with a computer, an internet link and the free VST Connect Performer app to be recorded direct on your studio DAW, even if they are on a different continent, because VST Connect Pro makes distance irrelevant. Not only that, but you can see and talk to each other, while the producer has full control over the recording session at both ends of the connection, including cue mix and talkback level.
Multi-track remote recording
Is a musician you want to work with thousands of miles away? No problem. Remote record in real time and the uncompressed audio files are loaded automatically in the background. And you never need to worry about the Internet connection – all VST Connect Performer HD recordings are saved on the musician’s local hard drive and can be reloaded into VST Connect Pro at any time. Worried about security? Don’t be – the unique data encryption system means that your work will always stay yours.
MIDI around the world
VST Connect Pro allows you to record MIDI and audio data live from a VST instrument loaded into VST Connect Performer, anywhere in the world. The artist can even connect a MIDI controller, leaving the session admin to record the incoming MIDI data directly in Cubase, together with the audio stream from the VST instrument.
It also works both ways – send MIDI data from your Cubase project, via VST Connect, to any MIDI compatible instrument or VST instrument connected to a remote instance of VST Connect Performer and record the incoming audio signal.
VST Connect Performer
VST Connect Performer is a license-free, DAW-independent application that lets the musician being recorded connect directly into your VST Connect Pro recording session. Available for PC, Mac or iPad, VST Connect Performer is remotely controlled from VST Connect Pro, freeing the musician to concentrate on their performance, be it vocals or an instrument sent as an audio signal. MIDI data or VST instruments can also be played in real time from VST Connect Performer into the VST Connect Pro session. Meanwhile, VST Connect Manager helps you maintain an overview of your recordings.
VST Connect offers you a fundamental kind of improvement that goes beyond the studio realm. Simply put, I have much more time for my kids now. For something as abstract as a feature in a DAW to have that kind of effect on one’s private life is quite an astonishing achievement. I can’t think of anything comparable.
Safe Spacer™ is a new, lightweight wearable device that helps workers and visitors maintain safe social distancing, enabling MI and other industries to safely re-open and operate with peace of mind.
Using Ultra-wideband technology, Safe Spacer runs wirelessly on a rechargeable battery and precisely senses when other devices come within 2m/6ft, alerting wearers with a choice of visual, vibrating or audio alarm.
Simple to use, Safe Spacer features a patent-pending algorithm that works immediately out of the box, with no set-up or special infrastructure needed and can be comfortably worn on a wristband, with a lanyard, or carried in a pocket. It offers ultra-precise measurement down to 10cm/4” – ten times more accurate than Bluetooth applications.
Ideal for factories, warehouses and offices, Safe Spacer can also be used by visitors to public spaces such as music schools, large retailers, auditoriums, workshop spaces and more. Engineered for fast, easy disinfection, it’s also waterproof. For minimal handling, Safe Spacer works wirelessly via NFC contactless technology or Bluetooth.
Each Safe Spacer also features a unique ID tag and built-in memory that can be optionally associated to workers’ names for tracing any unintentional contact, to keep organizations and their employees secure. To maintain the highest standard of privacy, no data other than the Safe Spacer ID and proximity is stored.
For advanced use, set-up and monitoring in workspaces, an iOS/Android app is also available to allow human resources or safety departments to associate IDs to specific workers, log daily tracing without collecting sensitive data, configure the alarms, set custom distance and alert thresholds, export log data and more.
“We created Safe Spacer to help our Italian factory workers maintain safe distance during re-opening. It’s easy to use, fast to deploy, private and secure, so it can be used comfortably in any situation. We hope this solution helps other companies feel secure as they re-open, too.”
Way back in 1996 — around the time electricity was discovered and cell phones were the size of your average 4-slot toaster — two Italian engineers got together to solve a problem in a recording studio. Could you get the sound of classic analog gear from a computer? One of them said (in Italian, of course) “Could we emulate electronic circuits using DSP algorithms and feed an audio signal through the computer and get the same sound?” The answer was yes, the piece of gear they emulated was a vintage Abbey Road console, and a company was born.
Although that’s a pretty simplified version of how IK came to be, it reflects the driving philosophy behind all of our products: give musicians the tools they want/need to be creative and productive.
Recreate classic legendary products in the digital world and make them available to all musicians. But make them simple. Make them both aspirational and affordable. And make them for Musicians First.
iRig Keys I/O
The iRig® Keys I/O series evolves the concept of the traditional controller as the only one on the market that integrates 25 or 49 full-sized keys with a fully fledged professional audio interface featuring 24-bit audio up to a 96kHz sampling rate, balanced stereo and headphone outputs, plus a combo input jack for line, instrument or mic input (with phantom power).
iRig MIDI 2
The first Lightning/USB-compatible mobile MIDI interface that works with all generations of iOS devices, Android (via an optional OTG to Mini-DIN cable) as well as Mac and PC. It features everything you loved about iRig MIDI but with even greater pocketability, connectivity and control.
Simply put, it’s the perfect MIDI solution for the musician on the move.
Syntronik is a cutting-edge virtual synthesizer that raises the bar in sound quality and flexibility thanks to the most advanced sampling techniques combined with a new hybrid sample-and-modeling synthesis engine. Watch as legendary keyboardist Jordan Rudess demonstrates his own Syntronik presets using the synth powerhouse alongside SampleTank 3, and see how a master keyboard player uses IK’s synth and workstation products to make great music.
The Bob Moog Foundation and the MIDI Association have had a close working relationship for many years. When we talked to Michelle Moog-Koussa, she graciously agreed to provide some materials on synthesizers for the May Is MIDI Month 2020 promotion.
The series of posters in this article are available for purchase here, with the proceeds going to the Moog Foundation.
We have combined it with Ableton’s excellent interactive website for Learning Synths, Google’s Chrome Music Lab, and text from synth master Jerry Kovarsky, monthly columnist for Electronic Musician Magazine and author of Keyboard For Dummies.
Together these elements come together to make a great introduction to synthesis appropriate for students and musicians of all ages and levels. There are links to more information in each section.
MIMM 2020 Webinar: The Minimoog, the Synth That Changed the World (Saturday, May 9, 10 am Pacific)
Join us this Saturday at 10 am Pacific, 1 PM Eastern and 6 PM Greenwich on MIDI Live to hear a panel discussion about the Minimoog, one of the most influential synths of all time.
Panelists include Michelle Moog-Koussa and David Mash from the Bob Moog Foundation Board of Directors, Amos Gaynes and Steve Dunnington from Moog Music, and synth artists and sound designers Jack Hotop, senior sound designer for Korg USA, Jordan Rudess, keyboardist for Dream Theater and president of Wizdom Music (makers of MorphWiz, SampleWiz, HarmonyWiz, Jordantron), and Huston Singletary, US lead clinician and training specialist for Ableton Inc.
Jerry Kovarsky, Author of Keyboard For Dummies
David Mash, President of the Bob Moog Foundation
Michelle Moog-Koussa, Executive Director of the Bob Moog Foundation
Jordan Rudess, Keyboardist for Dream Theater
Jack Hotop, Senior Sound Designer for Korg USA
Huston Singletary, Lead Sound Designer for Ableton
Composer Alex Wurman Provides Sonic Meditation For All Mothers as Part of Moogmentum in Place
The Bob Moog Foundation is proud to announce that EMMY® Award-winning composer Alex Wurman will perform a Facebook live-stream concert to benefit the Foundation on Saturday, May 9th at 8pm (ET) / 5pm (PT), the eve before Mother’s Day. Wurman will inspire a worldwide audience with A Sonic Meditation for All Mothers on a Yamaha Disklavier and a Moog Voyager synthesizer. The performance and accompanying question-and-answer session, which will last approximately an hour, is meant to offer musical solace during these times of difficulty.
Listen to the Synth sound in the video and then check it out for yourself via the link below.
Learn about synthesizers via Ableton’s interactive website. Play with a synth in your browser and learn to use the various parts of a synth to make your own sounds.
A waveform is a visual representation of a continuous tone that you can hear. In analog synthesis the waveforms are somewhat simple and repetitious (with the exception of noise), because that was easier to generate electronically. But any sustaining, or ongoing sound can be analyzed and represented as a waveform. So any type of synthesizer has what are referred to as waveforms, even though they may be generated by sampling (audio recordings of sound), analog circuitry, DSP-generated signals, and various forms of digital sound manipulation (FM, Phase Modulation, Phase Distortion, Wavetables, Additive Synthesis, Spectral Resynthesis and much more). However they are created, we generally refer to the sonic building block of sound as a waveform.
Simply stated, an oscillator is the electronic device, or part of a software synthesizer design that generates a waveform. In an analog synthesizer it is a physical circuit made up of electronic components. In digital/DSP-driven synthesizers (including soft synths) it is a part of the software code that is instructed/coded to produce a waveform, or tone.
Learn about synthesizers via Ableton’s interactive website. Play with a synth in your browser and learn to use the various parts of a synth to make your own sounds.
Harmonics are the building blocks of sound that make one instrument, or waveform, sound different from another. The levels of the harmonics as they exist in nature (the harmonic series) together determine the timbral “fingerprint” of a sound, so we can recognize the difference between a clarinet and a piano. Often these harmonics change in volume level and tuning as a sound develops, and might decay away: the more this happens, the more complex and “alive” a sound will seem to our ears. You can now go back to the original Waveform poster and understand that it is the harmonic “signature” of each waveform that gives it the sonic characteristics we used to describe each one.
In the general dictionary sense, a filter is a device that holds back, lessens, or removes some of what passes through it. In synthesis, a filter is used to reshape the harmonic content of the oscillator-generated waveform. The above poster describes three of the most common types of filters from analog synthesis, but many more have been developed with different characteristics. Different brands of synthesizers have their own filter designs with a special sound, and many of those classic designs are much sought-after and emulated in modern digital and software synthesizers.
Learn about synthesizers via Ableton’s interactive website. Play with a synth in your browser and learn to use the various parts of a synth to make your own sounds.
The poster says it straight up: an amp increases and decreases the volume of the sound that is output by the oscillator. If the sound only stayed at a single level as determined by the amp, sounds would be pretty boring. Thankfully we have many ways to vary that output, via envelopes, LFOs, step sequencers and more. Read on…
Learn about synthesizers via Ableton’s interactive website. Play with a synth in your browser and learn to use the various parts of a synth to make your own sounds.
An envelope (originally called a contour generator by Bob Moog!) is a building block of a synthesizer that changes the level of something over time. This is needed to recreate the complex characteristics of different sounds. The three main aspects of a sound that are usually shaped in this way are pitch (oscillator frequency), timbre (filter cutoff) and volume (amp level). Just describing the volume characteristics of a sound, some instruments keep sustaining (like a pipe organ), others decay in volume over time (a plucked string of a guitar, or a struck piano note). In modern synthesizers, and in modular synths an envelope can usually be routed to most any parameter to change its value over time. The poster describes what is called an ADSR envelope, but there are many types, some with many more steps able to be defined, and on the flip side some are simpler, with only Attack and Release stages.
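As a worked example, here is a minimal sketch of an ADSR envelope shaping an oscillator’s output, written in Python with NumPy. The stage times and the square-wave oscillator are illustrative assumptions, not any particular synth’s design.

```python
# A minimal sketch of an oscillator shaped by an ADSR envelope, using NumPy.
import numpy as np

SR = 44100  # sample rate in Hz

def adsr(attack, decay, sustain, release, hold_time):
    """Build an ADSR amplitude envelope (times in seconds, sustain 0..1)."""
    a = np.linspace(0.0, 1.0, int(SR * attack), endpoint=False)     # rise to peak
    d = np.linspace(1.0, sustain, int(SR * decay), endpoint=False)  # fall to sustain
    s = np.full(int(SR * hold_time), sustain)                       # hold while key is down
    r = np.linspace(sustain, 0.0, int(SR * release))                # fade after release
    return np.concatenate([a, d, s, r])

env = adsr(attack=0.01, decay=0.1, sustain=0.7, release=0.3, hold_time=0.5)
t = np.arange(len(env)) / SR
osc = np.sign(np.sin(2 * np.pi * 220 * t))  # crude 220 Hz square wave
signal = osc * env                          # the amp stage: oscillator times envelope
```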
Learn about synthesizers via Ableton’s interactive website. Play with a synth in your browser and learn to use the various parts of a synth to make your own sounds.
An LFO is another type of oscillator, dedicated to modulating another parameter of the sound in a cyclic fashion (meaning it keeps repeating). So it seems related to the function of envelopes, but it behaves differently in the sense that you can’t shape it as finely. Yet it is easier to use for simple repeatable things like vibrato (pitch modulation), tremolo (amp level modulation), and panning (changing the amp output from left to right in a stereo field).
How can we use MIDI to interact with these parameters?
The most common use of MIDI to affect these parameters is to map, or assign, a physical control on your keyboard or control surface to directly control a given parameter. We do this when we don’t have the instrument right in front of us (it may be a rack-mount device, or a soft synth), or when it doesn’t have many knobs/sliders/controls on the front panel. You would use CC (Control Change) numbers and match up the controller object (slider, encoder, whatever) to the destination parameter you wish to control.
Then, when you move the controller, it sends a steady stream of values (with 128 possible steps, 0–127) to move or change the destination. A device may have those CC numbers hard-set, or they may be freely assignable. Most soft synths have a “learn” function, where the synth “listens” for an incoming MIDI message and then sets it automatically, so you don’t even need to know which CC number is being used.
Some synths use what are called RPNs (Registered Parameter Numbers) and NRPNs (Non-Registered Parameter Numbers) to control parameters. While more complicated to set up, these types of message offer finer resolution than CCs (16,384 steps), but do the same thing. Soon there will be MIDI 2.0, which brings 32-bit resolution: over four billion steps. Yes, that number is correct!
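Here is a minimal sketch of the difference in practice, using Python and the third-party mido library. The port name is hypothetical, and the parameter and value numbers are illustrative; check your synth’s manual for its actual NRPN map.

```python
# Sending a plain CC versus an NRPN -- assumes the third-party "mido" library.
import mido

out = mido.open_output("My Synth")  # hypothetical port name

# A plain CC: 128 possible values (0-127)
out.send(mido.Message("control_change", control=74, value=100))

# An NRPN: select the parameter with CC 99/98, then send a 14-bit value
# (0-16383) with CC 6 (Data Entry MSB) and CC 38 (Data Entry LSB)
param, value = 300, 9000
out.send(mido.Message("control_change", control=99, value=param >> 7))    # NRPN MSB
out.send(mido.Message("control_change", control=98, value=param & 0x7F))  # NRPN LSB
out.send(mido.Message("control_change", control=6,  value=value >> 7))    # Data MSB
out.send(mido.Message("control_change", control=38, value=value & 0x7F))  # Data LSB
```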
From a performance standpoint, a cool benefit of using MIDI to control a parameter is you can choose to have a different type of controller interact with the given parameter than your hardware device offers. Some people like to use a ribbon to do pitch bends rather than a wheel. Or to sweep the cutoff of a filter using an X/Y pad rather than a knob. Or route keyboard after-touch to bring in vibrato or tremolo rather than a Mod Wheel (OK, this one went beyond using CCs but you get the picture).
Another nice way to use MIDI is to assign sliders or knobs to an ADSR envelope in a product that doesn’t already have dedicated knobs to control the stages. So now you can easily soften, or slow up the attack on a sound (or speed it up), lengthen or tighten up the release (what happens when you take your finger off the key).
Using MIDI really becomes an aid when I am recording. If I were to record only audio, as I play a synth I would need to get all of my interactions with the sound perfect during the performance: my pitch bends, my choices of when to add vibrato and how much, and any other interactions I want to make with the sound. I can’t fix them later, as they are forever frozen in the audio I recorded. If I capture my performance using MIDI, each of those aspects is recorded as a different type of MIDI message, and I can then go back in and adjust them later. Too much vibrato on that one note? Go into event edit, find the stream of MIDI CC#1 messages, and adjust it to taste. Even better, I can record my performance without worrying about other gestures I might want to make, and then go back and overdub them later. So I can manipulate the sound and performance in ways that would be impossible in real time. When I get the performance shaped exactly as I want it, I can bounce the MIDI track to audio and I’m done. Thank you, MIDI!
by Jerry Kovarsky, Musician and Author
A Brief History of the Minimoog Part I
Follow the life of the Minimoog synthesizer from its inception through its prolific contributions to popular music over the last four decades. In this first installment, documenting the journey of the Minimoog through the 1970s, we explore the musicians and the people who were instrumental in bringing the instrument to prominence. We also sit down with one of Moog Music’s earliest engineers, Bill Hemsath, who recalls the process of the Minimoog’s birth and sheds some light on what sets the Moog synthesizer apart from other analog synths.
by Moog Music
A Brief History of the Minimoog Part II
Chronicling the influential artists who used the Minimoog Model D to explore new genres and discover the sounds of tomorrow.
Since 1979, we’ve helped music makers all across the world build their dreams. We are a team of gear heads who are committed to doing the right thing for our customers.
We are musicians, engineers, producers, Juilliard grads, Grammy winners, mothers, fathers, sons and daughters. We are diverse in our backgrounds and beliefs, but we’re all bound by the same goal: do the right thing for the customer.
Sweetwater offers customers a free 2-year warranty on nearly everything we sell, free shipping, 24/7 technical support and the dedicated support of our Sales Engineers. Visit us at Sweetwater.com, or give us a call at 800.222.4700 to see how we can help you achieve your creative goals.
Sweetwater Resources
Sweetwater MIDI Interface Buying Guide
How to Choose a MIDI Interface
When MIDI (Musical Instrument Digital Interface) was developed over 30 years ago, it resulted in a flood of music technology. Software DAWs have long replaced the hardware sequencers of the twentieth century, bringing an ever-increasing demand for effective ways to get MIDI in and out of computers.
MIDI keyboard controllers have become an important part of the music-making process for contemporary musicians and producers due to the increasing use of virtual instruments onstage and in the studio.
Knowing the ranges that instruments and voices occupy in the frequency spectrum is essential for any mixing engineer. Sweetwater has put together a Music Instrument Frequency Cheatsheet, listing common sources and their “magic frequencies” — boost/cut points that will produce pleasing results. Just remember to trust your own ears!
You can download the PDF of this chart by clicking here and then print it out.
Since its launch in 1997, Sweetwater’s Word for the Day feature has presented nearly 4,900 music and audio technology terms. Our definitions can help you cut through industry jargon, so you can understand what’s going on.
Moog Music is the leading producer of analog synthesizers in the world. The employee-owned company and its customers carry on the legacy of its founder, electronic musical instrument pioneer Dr. Bob Moog. All of Moog’s instruments are hand-built in its factory on the edge of downtown Asheville, NC.
Moog Subsequent 25
Subsequent 25 is a 2-note paraphonic analog synthesizer that melds the hands-on analog soul of classic Moog instruments with the convenience and workflow of a modern sound-design machine. Moog’s most compact keyboard synthesizer, the Subsequent 25 delivers all of the rich sonic density that Moog synthesizers are known for.
Moog One® is the ultimate Moog synthesizer – a tri-timbral, polyphonic, analog dream-synth designed to inspire imagination, stimulate creativity, and unlock portals to vast new realms of sonic potential.
The Moog Factory in Asheville, NC has resumed production of the highly sought-after Moog 16 Channel Vocoder, an instrument which continuously analyzes the timbral characteristics of one sound (Program) and impresses these timbral characteristics upon a second signal (Carrier). Originally introduced in 1978, and famously heard on Giorgio Moroder’s E=MC2, this model has been used to transmute vocals, transform synthesizers, and electronically encode sound for over 40 years.
Melodics is modern learning for modern instruments
Melodics is modern learning for modern instruments, supporting MIDI Keyboards, Pad Controllers, and electronic drum kits. It’s structured learning for solid progress. Melodics takes the “but where do I start?” out of learning music. Start with a genre you love, or a technique you want to master. Whatever your skill level, there’s something there. Then take a course – Melodics courses take you on a journey, teaching you everything you want to know about a genre or concept.
by Melodics
Founder and CEO Sam Gribben
Melodics was founded by Sam Gribben, the former CEO of Serato and one of the people responsible for the digital DJ revolution and controllerism. So it’s not surprising that Melodics started with finger drumming on pad controllers.
Melodics hardware partners
It’s also not surprising that Sam took a page out of the Serato playbook and worked with well-established hardware companies to create value-added bundles with Melodics™. Here is a list of some of the companies that Melodics™ works with.
Because of the relationships he built up over ten years at Serato, Melodics has a stellar collection of artists who contribute lessons and content for the Melodics™ platform. This is just a small sample of the Melodics™ artist roster.
Melodics™ started with training for Pad Controllers like Ableton Push and Native Instruments Maschine. They have guides on techniques and correct posture. Long story short, they treat these new controllers as legitimate musical instruments that you need to practice and learn to play exactly the same way you would with a traditional instrument like a cello or a clarinet.
Melodics for Electronic Drums
Melodics™ is a perfect practice partner for someone with electronic drums.
Melodics™ for Keyboards
Melodics™ has a unique interface for keyboards that shows you what notes are coming next.
Melodics and MIDI
Melodics™ uses MIDI for all of its core functionality. SysEx is used to identify which device is connected and automatically configure the hardware controls. The lessons are MIDI-based, so Melodics™ can look at your performance and compare it to the notes in the MIDI file: it can determine whether you played the right note, whether you played early or late, and provide an ongoing report on your musical progress.
MIDI underpins everything we do, from the lesson creation process, to how we play back the lessons and display feedback, to how we interact with the instruments. Under the hood, Melodics is a MIDI sampler. We take the input from what the student is playing, compare that to the MIDI in the lesson we created, and show the student how they are doing compared to a perfect performance.
by Melodics
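As an illustration of that idea, here is a minimal Python sketch of a note-by-note comparison between a lesson’s MIDI notes and a student’s performance. The data format and the timing tolerance are assumptions for the example, not Melodics’ actual internals.

```python
# A sketch of lesson-versus-performance note comparison; data is illustrative.
TOLERANCE = 0.05  # seconds either side that still counts as "on time"

lesson = [(0.0, 60), (0.5, 62), (1.0, 64)]   # (time in seconds, MIDI note)
played = [(0.02, 60), (0.55, 62), (1.0, 65)]  # what the student actually played

for (t_ref, n_ref), (t_play, n_play) in zip(lesson, played):
    if n_play != n_ref:
        print(f"wrong note: expected {n_ref}, got {n_play}")
    elif abs(t_play - t_ref) <= TOLERANCE:
        print(f"note {n_ref}: on time")
    else:
        print(f"note {n_ref}: {'late' if t_play > t_ref else 'early'} "
              f"by {abs(t_play - t_ref) * 1000:.0f} ms")
```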
Get started for free!
You can download and start learning with Melodics at no charge.
The KMI K-Board Pro 4 started off as a Kickstarter campaign in 2016 and quickly reached its funding goal of $50,000. What sets the Pro 4 apart from other controllers is KMI’s patented Smart Sensor Fabric technology, a unique and proprietary conductive material that changes resistance as it is compressed.
KMI’s patented Smart Sensor Fabric technology
Expressive
KBP4 has Smart Fabric Sensors under each key, bringing five dimensions of expressivity to your playing.
Playable
KBP4 is configured like a traditional keyboard, giving you a familiar playing surface so you can start expressing yourself immediately.
Programmable
The KBP4 Editor software works on Mac, Windows, or in a web browser to fully customize every element of the KBP4 playing experience.
Every K-Board Pro 4 ships with a free license for Bitwig Studio 8-Track
Bitwig Studio 8-Track is the trim and effective digital audio workstation for starting to produce, perform, and design sounds like a pro. 8-Track includes a large selection of Bitwig devices for use on up to eight project tracks with audio or MIDI. Plug in your controller, record your instrument, produce simple arrangements, design new sounds, or just jam.
Bitwig Studio 8-Track is the sketch pad for your musical ideas featuring the acclaimed workflow of Bitwig Studio.
Bitwig Studio 8-Track is available exclusively through bundles with selected partners.
K-Board Pro 4, Bitwig and MPE Expression
KMI put together a tutorial to show how to set up the K-Board with Bitwig to take advantage of MPE’s advanced MIDI expression capabilities.
Ólafur Arnalds didn’t start out playing keyboards. He started out as a drummer in hard rock bands. He is not alone. Yoshiki from the legendary Japanese hard rock band X Japan comes to mind. Many people forget that the piano is classified as a percussion instrument along with marimbas and vibraphones.
He has a unique approach to music that combines technology with a traditional, almost classical approach to composition. He is also one of the few people still using the Moog Piano Bar, a now-discontinued product developed by Bob Moog and Don Buchla to turn any piano into a MIDI device.
Photo: Richard Ecclestone
What’s behind bleep bloop pianos
In many interviews, Ólafur says that his acoustic pianos bleep and bloop.
In these two YouTube videos, he explains how MIDI technology is a core part of his creative process. What is interesting is how organic and emotional the resulting music is. The technology never gets in the way of the art and only complements it.
This video explains how the three acoustic pianos are connected by MIDI.
I am in constant search of new ways to approach art with technology, interaction and creativity.
by Halldór Eldjárn
Halldór Eldjárn is another Icelandic artist who worked on the All Strings Attached project and developed some robotic MIDI instruments for the project.
Ólafur Arnalds on NPR’s Tiny Desk Concerts
To see a complete performance of this unique use of MIDI processing, listen to this performance on NPR Music Tiny Desk Concerts.
How to Bleep (and Bloop) yourself
Arnalds has released a library of sounds for Spitfire Audio recorded at his studio on his ‘felted’ grand piano along with added content in the Composers Toolkit.
Recently, MIDI Manufacturers Association member Blokas released the Midihub, a MIDI router and processor. In our article on the Midihub, Loopop explains how to use it to create some Ólafur Arnalds-inspired MIDI effects of your own.
The SHARC® Audio Module is an expandable hardware/software platform enabling project prototyping, development and deployment of audio applications including effects processors; multi-channel audio systems; MIDI synthesizers/controllers, and many other DSP/MIDI-based audio projects.
The centerpiece of the SHARC Audio Module is Analog Devices’ high-performance SHARC ADSP-SC589. Combining two 450 MHz floating point DSP cores, a 450MHz ARM® Cortex®-A5 core and an FFT/IFFT accelerator with a massive amount of on-board I/O, the ADSP-SC589 is a remarkable engine for audio processing.
This development platform is designed for the experienced programmer and is supported with an extensive wiki that includes a bare metal, light-weight C / C++ framework designed for efficient audio signal processing with lots of example code and numerous tutorials and videos. These tutorials include audio processing basics, effects creation and a simple MIDI synthesizer.
In addition, the SHARC Audio Module supports the MicroPython programming language and Faust, a functional programming language, specifically designed for real-time audio signal processing and synthesis.
The SHARC Audio Module from Analog Devices comes complete with a license-free Eclipse development environment (CCES) and a free in-circuit emulator. Also available is the Audio Project Fin, a must-have add-on board for serious MIDI developers with 5-pin MIDI DIN, ¼” balanced audio, control pots, switches and a prototyping area. The best news is that both boards can be had for less than $300 total!
British mega-band MUSE is currently on tour promoting their latest album Simulation Theory, performing in sold-out stadiums all over the world. Each night, frontman and guitarist Matt Bellamy brings out a one-of-a-kind guitar with a special history to play the song “The Dark Side.” While Bellamy is happy with the result, reporting that “the guitar works great!”, the story of how this guitar was conceived and built in just a few short weeks is very interesting.
Matt Bellamy, being the perfectionist that he is, wants the sounds he created in the studio on stage as much as possible. One essential part of his sound is the Arturia Prophet V synthesizer. Being a user of Fishman’s TriplePlay MIDI guitar pickup & controller, both on stage and in the studio, he wanted to continue to use that to play the Arturia synth live, but without distance, range, cables and a computer getting in the way of his stage performance.
When Matt told me he absolutely wanted to use the Prophet V softsynth live on tour but still be able to move around the stage without any restrictions, I knew we had to find a new kind of solution that would take the computer out of the picture.
by Muse guitar tech Chris Whitemyer
Chris Whitemyer was aware of Swedish music tech company MIND Music Labs and how their ELK MusicOS could run existing plugins and instruments on hardware. Thinking MIND might be the missing piece of the puzzle he approached them at the 2019 NAMM Show. Together with Fishman and Arturia, a first meeting was held in the MIND Music Labs booth on the show floor. That meeting, which took place just a few weeks before the start of Muse’s 2019 World Tour, kicked off several hectic weeks resulting in the three companies producing a new kind of guitar just in time for the tour’s first date in Houston, TX.
“Going to that first meeting at NAMM I didn’t know what to expect, but as soon as we plugged the guitar with our TriplePlay system into the Powered by ELK audio interface board, it was pretty clear that the Fishman and ELK systems would be compatible.”
What was clear after the first meeting was that the reliability of the Fishman TriplePlay MIDI Guitar Controller in combination with ELKs ability to run existing plugins inside the guitar could open up a new world for performers like Matt Bellamy. And with the tour just weeks away, a plan was hatched to get the system finalized and ready for use in the most demanding of conditions – a world tour of arenas and stadiums.
Only days after the closing of the NAMM Show, MIND Music Labs CTO Stefano Zambon flew to Fishman’s Andover, MA headquarters to figure out how to get a Powered by ELK audio board inside a guitar that not only plays well enough to satisfy a world-class performer, but could also control the Arturia Prophet V at extremely low latency. In short: redefine the state of the art for synth guitars.
Getting three different companies to join forces on a special project like this does not happen very often, so this was truly special. To go from a first meeting at NAMM to a functioning system in just weeks was a mind-blowing achievement. It required the special expertise and focused efforts of all three companies to pull it off – I can still hardly believe we did.
“To see one of our V Collection classic products like the Prophet V on stage with Muse is very exciting. The fact that it is the same plugin running in the guitar as you use in the studio really makes all the difference. I mean, Matt Bellamy even uses the same preset in the studio!”
by Arturia CEO Frédéric Brun
On February 22nd, just four weeks after that first meeting at NAMM, MUSE went on stage in Houston in front of a jam-packed Toyota Center. Seven songs into the show, Chris Whitemyer handed Matt Bellamy the new guitar for the song “The Dark Side.”
“When all the guys got together to build this, we didn’t tell Matt whether a new guitar was going to be built or not. I just gave it to him for the first show and told him he could walk as far as he wanted on stage. He just said ‘Oh, cool!’”
I had no doubt in my mind it would work and it performed flawlessly. When I first got the guitar one week before the first show I tested it very thoroughly, leaving it on for four hours, turning it off and on fifty or more times, and jumping up and down with it and bouncing it off a mattress. It passed all the tests. The guitar is rock solid! Matt and I couldn’t be happier. It does everything I hoped it would and it’s on stage every night.
by Muse guitar tech Chris Whitemyer
If you want to see this unique guitar in action it will be on MUSE’s Simulation Theory World Tour in the U.S. through May, then in Europe all summer and in South and Central America this fall.
You may not know it, but a lot of the software you use may be built with the same framework: JUCE. JUCE is used for the development of desktop and mobile applications.
The aim of JUCE is to allow software to be written such that the same code can run identically on Windows, Mac OS X and Linux platforms. It supports various development environments and compilers.
JUCE not only helps you build audio apps and synths, but also shows you how to control them with MIDI.
David Zicarelli from Cycling ’74 and Brett Porter from Art and Logic use JUCE
Why does that matter? Both David and Brett are in the MIDI 2.0 prototyping working group, and because much of the MIDI 2.0 prototyping work they are doing is being done in JUCE, it will run across the various development environments and compilers JUCE supports. Tools like JUCE weren’t available back in 1982!
Melodics™ is a desktop app that teaches you to play MIDI keyboards, pad controllers, and drums.
Melodics works with any MIDI capable keyboard, pad controller, or drum kit. It has plug & play support for the most popular devices on the planet and custom remapping for everything else.
It’s free to download, and comes with 60 free lessons to get you started.
With acoustic instruments, playing in time comes naturally. You can jump in when the time’s right, and everyone keeps their flow. Playing together with electronic instruments hasn’t always been so easy. Now Link makes it effortless.
Link is a technology that keeps devices in time over a local network, so you can forget the hassle of setting up and focus on playing music. Link is now part of Live, and also comes as a built-in feature of other software and hardware for music making.
Join the session
Hop on to the same network and jam with others using multiple devices running Link-enabled software. While others play, anyone can start and stop their part; or start and stop multiple Link-running applications at the same time. And anyone can adjust the tempo and the rest will follow. No MIDI cables, no installation, just free-flowing sync that works.
With Live and beyond
People make music using a range of instruments, so Link helps you play together using a range of devices. A growing number of music applications have Link built in, which means anyone on the same network can play them in time with Live. You can even use Link without Live in your setup: play Link-enabled software in time using multiple devices, or multiple applications on the same device.
Push is an instrument that puts everything you need to make music in one place—at your fingertips
Making music is hard. To stay in the flow, you need to be able to capture your ideas quickly, and you need technology to stay out of the way. Computers make it possible for one person to create whole worlds of sound. But instruments are where inspiration comes from. Push gives you the best of everything. It’s a powerful, expressive instrument that gives you hands-on control of an unlimited palette of sounds, without needing to look at a computer.
Spend less time with the computer when composing ideas, editing MIDI or shaping and mixing sounds. Browse, preview and load samples, then slice and play them on 64 responsive pads. Play and program beats, melodies and harmonies. See everything you do directly on Push’s multicolor display. Integration with Live is as tight as possible, which means what you do on Push is like putting your hands directly on the software.
Ableton Push 2 Key Features:
Hardware instrument for hands-on playability with Ableton Live
Simultaneously sequence notes and play them in from the same pad layout
Creative sampling workflows: slice, play and manipulate samples from Push
Navigate and refine your music in context directly with advanced visualization on the Push multicolor display
64 velocity- and pressure-sensitive backlit pads
8 touch-sensitive encoders for controlling mixer, devices and instruments, and Live browser navigation
Launch clips from the pads for jamming, live performance or arrangement recording
Scales mode offers a unique approach to playing notes and chords
Includes Beat Tools—a toolkit for beatmakers with more than 150 drum kits and instruments, 180 audio loops and much more
Includes Live 10 Intro for new users
Push gives you the best of both worlds for making music: inspiring hardware for hands-on control at the beginning, and full-featured music creation software for fine-tuning the details at the end.
Push is the music making instrument that perfectly integrates with Ableton Live. Make a song from scratch with hands on control of melody, beats and structure.
NKS is an integration technology developed by Native Instruments
NKS brings all your software instruments, effects, loops and samples into one intuitive workflow, creating seamless integration between NI and other leading developers. It gives you streamlined browsing, consistent tagging, instant sound previews, pre-mapped parameters, Smart Play features, and more. NKS also connects all your favorite tools to our KOMPLETE KONTROL keyboards and software, MASCHINE, and third-party controllers. So if you see the NKS logo, you know what to expect: an intuitive and comfortable workflow that makes it easy to bring your sound to life.
by Native Instruments
BROWSE BETTER AND FASTER THAN EVER
Hear instant audio previews as you scroll through thousands of patches, from hundreds of instruments, from over 75 developers.
EVERYTHING IS PRE-MAPPED
Start playing and tweaking instantly – just load an instrument or an effect and go. Each parameter is pre-mapped to the hardware, with the mappings designed by the developers themselves.
PLAY COMPLEX MUSIC EASILY
The KOMPLETE KONTROL software lets you play intricate chord progressions and arpeggios, even without musical training, with single finger control. NKS helps bring out the music in you.
DEEPER CONTROL
The Light Guide on the KOMPLETE KONTROL S-Series keyboards lets you see – and control – a range of deeper settings including articulations, keyswitches, and more.
The Sonogenic is not just a MIDI controller, it has built-in sounds, speakers and USB Audio/MIDI connectivity
The SHS-500 has everything you need to start playing right away, all built into the compact “keytar” form factor.
Sonogenic Red and Black
Sonogenic Controls
The Sonogenic has both Audio (stereo 44.1kHz) and MIDI USB capabilities and lots of connectivity
The SHS-500 Sonogenic connectivity
The SHS-500 features Bluetooth MIDI for wireless iOS connectivity
The Chord Tracker App
Chord Tracker is an app that analyzes the songs in your music library and nearly instantaneously shows you the musical structure of each in the form of an easy-to-understand chord chart like this:
Chord Tracker
Sonogenic SHS500 Features:
37-note keytar with Bluetooth MIDI for wireless iOS connectivity
JAM mode lets you focus on playing rhythms while the Sonogenic takes care of playing the correct notes of songs
37 mini keys that play like a full-sized keyboard
Modulation wheel lets you control the amount of modulation effect on your sound
The USB-to-Host port connects to a wide variety of educational, creative, and entertaining musical applications on your computer or mobile device
3.5mm AUX input for connecting a portable music player, iOS device, mixer, or computer for audio playback via internal speakers
¼” AUX Line output jacks for connecting to an external amp or PA system without disabling the onboard speakers
Included AC adapter, MIDI breakout cable, neck strap
Though Roland co-invented the Musical Instrument Digital Interface (MIDI) well over three decades ago, it’s still an integral part of new products and is as useful to musicians as ever. A prime example is the tiny but mighty Roland VT-4 Voice Transformer, a portable effects box for the instrument inside us all—the human voice.
Today’s musical styles increasingly use unusual vocal sounds with heavy processing, making them stand out and grab the listener’s attention. With the Roland VT-4, you have a wealth of modern and retro vocal effects at your fingertips, with no need for a complicated setup using a computer and plug-ins. The VT-4 has everything from delay and reverb to mind-bending formant and vocoding effects. Better still, the Roland VT-4’s performance-oriented interface lets you ride the controls while you sing to constantly alter the sound to suit the track and enhance the vibe of your performance.
But what if you need more control over your pitch or the voicings of your vocal harmonies? That’s where MIDI comes in.
While the Roland VT-4 works great on its own and can harmonize and vocode without any input other than your voice, plugging in a MIDI keyboard opens even more expressive possibilities. Through MIDI you can control the Auto-Pitch, harmony, and vocoder engines in real time with the notes you play from a connected controller. You can hard-tune your voice to specific notes as you sing, or create instant MIDI-controlled melodies and multi-part harmonies with voicings that follow your chords, and it is SO simple to set up!
Supported by MIDI, the Roland VT-4 Voice Transformer brings real time vocal processing (including vocoding!) into the 21st Century.
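For example, here is a minimal sketch of driving a harmony voicing from code rather than a keyboard, using Python and the third-party mido library. The port name is hypothetical and may differ from how the VT-4 actually enumerates; the device simply needs to receive the notes on its MIDI input.

```python
# Holding a chord into a MIDI-controlled harmonizer -- assumes the
# third-party "mido" library; the port name is hypothetical.
import time
import mido

out = mido.open_output("VT-4")          # hypothetical port name

chord = [60, 64, 67]                    # C major: the harmony engine follows these
for note in chord:
    out.send(mido.Message("note_on", note=note, velocity=100))
time.sleep(2.0)                          # hold the voicing while you sing
for note in chord:
    out.send(mido.Message("note_off", note=note))
```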
One of the biggest recent developments in MIDI is MIDI Polyphonic Expression (MPE). MPE is a method of using MIDI that enables multidimensional controllers to control multiple parameters of every note within MPE-compatible software.
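The core trick of MPE is that each note is placed on its own MIDI channel, so per-note controllers such as pitch bend affect only that note. Here is a minimal sketch in Python with the third-party mido library; the port name is hypothetical.

```python
# The core MPE idea: one note per channel enables per-note pitch bend.
# Assumes the third-party "mido" library and an MPE-compatible receiver.
import mido

out = mido.open_output("MPE Synth")     # hypothetical port name

# Two notes on two member channels (channels 2 and 3, i.e. indexes 1 and 2)
out.send(mido.Message("note_on", channel=1, note=60, velocity=100))
out.send(mido.Message("note_on", channel=2, note=64, velocity=100))

# Bend only the first note; the second is unaffected because it lives on
# its own channel -- impossible with ordinary channel-wide pitch bend
out.send(mido.Message("pitchwheel", channel=1, pitch=4096))
```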
It has never been as easy to stay “in the box” as it is now. There are lots of software virtual instruments out there; some emulate hardware instruments, and others offer completely new sounds. That said, there’s something special about performing on a synthesizer or MIDI instrument with its own sound engine that’s difficult, if not impossible, to capture in software. And just as software instruments keep getting better, hardware MIDI instruments have never been better or more affordable. Here are ways you can record your MIDI instrument, depending on the features.
Recording a MIDI Instrument with USB Audio and MIDI
If your MIDI instrument has a USB port that can both send and receive MIDI and audio data, you’re in luck! Recording this device will be a breeze. First, connect the USB port on your instrument to a USB port on your computer. Then make sure that your DAW sees the USB ports of your instrument as both audio and MIDI devices. You’ll want to set up an instrument track to record and play back the MIDI data from your instrument, and to accept the audio input coming from the USB audio connection as well. This allows for the most flexible use of your MIDI instrument possible — you can record it, edit the recorded MIDI notes, and then hear the resulting edited audio coming back from your instrument.
Recording a MIDI Instrument with USB MIDI Only
Many MIDI instruments that have USB ports will only send and receive MIDI data over USB. This isn’t quite as convenient as if your instrument could send both audio and MIDI over USB, but it’s still easy to work with. First, connect the USB port of your instrument to a USB port on your computer, and connect the audio outputs of your instrument to audio inputs on your audio interface. Next, set up a MIDI track in your DAW to record and play back the MIDI data from the USB connection of your instrument. Then set up an audio track in your DAW to record the audio inputs on your interface that you’ve connected your instrument to. Now your MIDI track will record and then play back MIDI to your instrument over USB, and your audio track will record the audio output from your instrument. Although connecting everything is a bit more complicated with this method, you’ll still be able to record, edit the recorded MIDI notes, and then hear the resulting edited audio coming back from your instrument.
Recording a MIDI Instrument with No USB Ports
Some MIDI instruments, especially older ones, don’t have any USB ports at all. They will usually use the original 5-pin DIN MIDI ports. This requires a little extra gear but is fundamentally the same as recording a MIDI instrument with USB MIDI only. The big difference is that you’ll need a separate USB MIDI interface to send and receive MIDI between your instrument and computer. Some audio interfaces may come with a built-in 5-pin DIN MIDI interface; otherwise, you can purchase a dedicated one. You can buy inexpensive MIDI interfaces with a single MIDI in and MIDI out port, such as the M-Audio MIDISport 2 x 2, or fully featured rackmounted MIDI interfaces, such as the MOTU MIDI Express series with up to 8 x 8 MIDI ports, depending on how many MIDI devices without USB you have. Once you have your MIDI devices connected to your computer via a USB MIDI interface, the rest of the process is identical to the prior method: recording a MIDI instrument with USB MIDI only.
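Whichever method you use, it helps to confirm that your computer actually sees the instrument’s MIDI ports before troubleshooting your DAW. Here is a minimal sketch using Python and the third-party mido library (an assumption; your DAW performs this discovery internally).

```python
# A quick check that your computer sees the instrument's MIDI ports,
# using the third-party "mido" library.
import mido

print("Inputs: ", mido.get_input_names())
print("Outputs:", mido.get_output_names())

# Print incoming notes from the first input as an "is MIDI flowing?" test
# (press Ctrl-C to stop)
with mido.open_input(mido.get_input_names()[0]) as inp:
    for msg in inp:
        if msg.type in ("note_on", "note_off"):
            print(msg)
```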
It might take a little more planning to record a hardware MIDI instrument, but the expression potential and the often unbeatable sound quality make it worth it. Don’t let the fact that the sounds aren’t inside your computer scare you off; recording MIDI instruments is easy!
BLOCKS is a modular music making system made up of 5 components
Seaboard Block Super Powered Keyboard
Multi-award-winning Seaboard interface
5D Touch technology
24 keywave, two-octave playing surface
Hundreds of free sounds
Suite of music making software for desktop and mobile
Wireless and portable for making music on the go
Connects to other Blocks
Lightpad Block Expressive Musical Touchpad
Touch responsive soft silicon playing surface
LED illumination reconfigures Lightpad M for different notes and scales
Adaptable surface can become a drum pad, fader bank, effects launcher and more
Hundreds of free sounds
Suite of music making software for desktop and mobile
Wireless and portable for making music on the go
Connects to other Blocks
Perform with the Live Block
The Live Block is for performance. The buttons let you switch scales and octaves, trigger chords and arpeggios, and sustain notes in real time.
Touch Block-Add Expression Faster
Touch Block helps you adjust the expressive behavior of your Seaboard Block and Lightpad Block. Turn up or turn down the responsiveness of the surface to the Strike, Glide, Slide, Press, and Lift dimensions of touch. Maximize the depth of expression available through pressure, or minimize the pitch-bend effect of sideways movements. Customize your control of any sound in real time and on the fly.
Loop Block-Produce Faster
Loop Block helps you produce a track faster. Record loops and play them back. Set your tempo, and quantize your loops so they’re always in time.
ROLI Dashboard
Customize BLOCKS and the Seaboard RISE for your workflow
Blocks become open-ended MIDI control surfaces through ROLI Dashboard. Customize the LED-illuminated Lightpad Block by loading different apps, including a note grid, a bank of faders and more. Use Control Blocks as CC controllers for your favorite DAW.
MIDI Controller/Audio Interface for mobile musician
The iRig Keys I/O comes in two versions: a 25-key MIDI controller and a 49-key MIDI controller. Both feature built-in audio interfaces with 24-bit/96kHz sound quality, a Neutrik combo input with phantom power, and eight touch-sensitive RGB LED backlit drum pads.
iRig Keys I/O 25
iRig Keys I/O 49
Complete suite of music production software included
The iRig Keys I/O 25 comes with all the software you need to start creating music. Ableton Live Lite is the perfect DAW to get started with, and IK Multimedia adds T-RackS Deluxe with 10 mixing and mastering tools, plus SampleTank 3 with 4,000 instruments, 2,500 rhythm loops, and 2,000 MIDI files. If you are a mobile musician, SampleTank iOS for iPad and iPhone is a full-featured mobile sound and groove production studio.
Ableton Live Lite
SampleTank 3
T-RackS Deluxe
IK Multimedia iRig Keys I/O 49 Features:
MIDI controller with 49 full-size, velocity-sensitive keys
8 touch-sensitive RGB LED backlit drum pads for beat creation
Touch-sensitive sliders and buttons plus touch-sensitive rotary controllers for controlling soft synths and other apps
Built-in USB audio interface features excellent 24-bit/96kHz sound quality
Neutrik combo input with phantom power handles nearly any microphone or instrument
Stereo line output and headphone jack provide ample monitoring options
This panel discussion will also include live and video performances from the participants.
Panelists: Jordan Rudess, Pat Scandalis, Alon Ilsar, Keith Groover, Qianqian Jin, Nathan Asman
The Glide, GeoShred and Airsticks win Guthman New Instrument Competition
On March 9th at the Georgia Tech Center for Music Technology, three judges, with audience input, selected the three winners of the 2019 Guthman New Instrument Competition.
All three judges are people who are heavily involved with MIDI.
Pamela Z: Composer, Performer, Media Artist
Roger Linn: Technical Grammy Award Winner
Ge Wang: Associate Professor, Stanford University
The Glide was conceived, designed, and coded by Keith Groover, a musician, music educator, and inventor living in South Carolina. There are two controllers, one for each hand, and each controller has three accelerometers (for the X, Y, and Z axes). It is primarily designed to be a MIDI controller broadcasting over Bluetooth, which means that you pair it with a phone, tablet, or computer and then play through a synthesizer app. Here is a video on how it works.
Jordan Rudess is no stranger to MIDI.org. We have done exclusive interviews with him, and his videos of playing a number of MPE instruments are featured in our articles on MPE. Now his GeoShred app has won 2nd place in the 2019 Guthman New Instrument Competition. GeoShred is highly expressive when controlling, and being controlled by, instruments that use the MPE MIDI specification (MIDI Polyphonic Expression). It’s both a powerful synth and a formidable iPad-based MIDI/MPE controller!
The AirSticks combine the physicality of drumming with the unlimited possibilities of computer music, taking the practice of real-time electronic music to a new realm.
The AirSticks were developed by drummer/electronic producer Alon Ilsar and computer programmer/composer Mark Havryliv. AirSticks transform off-the-shelf gaming controllers into a unique musical instrument.
The Qijin was developed by Qianqian Jin, a student in the Technology and Applied Composition (TAC) program at the San Francisco Conservatory of Music. The Qijin is a customized MIDI controller for a guzheng (a Chinese classical zither). It is not only a MIDI controller, but it also has a built-in amplification system to augment its capacity for live performance and sound design. A built-in Arduino board that supports MIDI allows the performer to connect to any MIDI-compatible music software.
The Kaurios gets its name from the amazingly unique wood that it is made out of. Kauri is the oldest wood available in the world and has been buried underground in New Zealand for about 50,000 years. So Nathan Asman’s project marries ancient wood with state-of-the-art wireless BTLE MIDI technology.
This custom-built instrument is called Curve, and is named after the shape and contour of the interface itself. I wanted to create something that had a myriad of different sensors and ways of controlling different musical parameters.
The tagline for the Margaret Guthman New Instrument Competition is “the future of music” and all three winners of the 2019 competition were MIDI controllers. So the future of music is MIDI. We couldn’t agree more.
Controllerism
May 4, 2019 at 3 PM Pacific Time
A panel discussion with the people who created the Controllerism movement about how MIDI influences the world of digital DJs.
Laura Escudé, Sam Gribbens, Huston Singletary, Moldover, Kate Stone, Shawn Wasabi
Panelists
Laura Escudé
International music producer, DJ, controllerist, violinist and live show designer Laura Escudé aka Alluxe has been an important figure in some of the most revered concerts around the globe, DJing, programming and designing shows for the likes of Kanye West, Jay Z, Miguel, Charli XCX, Demi Lovato, Iggy Azalea, Yeah Yeah Yeahs, Herbie Hancock, Cat Power, Bon Iver, Drake, The Weeknd, Silversun Pickups, Garbage, Childish Gambino and M83. Escudé is a classically trained violinist, an Ableton Certified Trainer and is the CEO of Electronic Creatives, a team of some of the most talented and sought after programmers and controllerists in the business.
Sam Gribbens
Sam was the CEO of Serato when the Controllerism movement began. He then went on to found Melodics™. Having finished up at Serato after a decade at the helm, Sam was ready for something new. He’d worked with some of the biggest artists in the music world, and with the international companies who built the instruments & controllers they used. Along the way he noticed how important pad & cue point drumming was becoming in the overlapping worlds of DJing & production. Thus, an idea was born.
Huston Singletary
Sound designer, producer, film composer, product specialist, clinician, and programmer Huston Singletary has been affiliated with the best of the best in the sound design/synth world, including Toontrack, iZotope, Synthogy, Native Instruments, Roland, Alesis, and Spectrasonics.
Moldover
History only notes a handful of artists who successfully pushed the limits – both with their music and the design of their musical instruments. What Bach was to the keyboard and Hendrix was to the guitar, Moldover is to the controller. Disillusioned with “press play DJs”, Moldover fans eagerly welcome electronic music’s return to virtuosity, improvisation, and emotional authenticity. Dig deeper into Moldover’s world and you’ll uncover a subversive cultural icon who is jolting new life into physical media with “Playable Packaging”, sparking beautiful collaborations with his custom “Jamboxes”, and drawing wave after wave of followers with an open-source approach to sharing his methods and madness.
Kate Stone
Dr. Kate Stone, founder of Novalia, works at the intersection of ordinary printing and electronics to make our current analogue world come alive through interaction. Novalia creates paper thin self-adhesive touch sensors from printed conductive ink and attached silicon microcontroller modules. Their control modules use Bluetooth MIDI connectivity. “Novalia’s technology adds touch, connectivity and data to surfaces around us. We play in the space between the physical and digital using beautiful, tactile printed touch sensors to connect people, places and objects. Touching our print either triggers sounds from its surface or sends information to the internet. From postcard to bus shelter size, our interactive print is often as thin as a piece of paper. Let’s blend science with design to create experiences indistinguishable from magic.”
Shawn Wasabi
Shawn Wasabi is an Artist/Producer/Visionary of Filipino descent from the city of Salinas, California. He first awed the Internet with his release of “Marble Soda”, made using the rare Midi Fighter 64, which he co-designed. Using this one-of-a-kind machine, Shawn reached 1 million views on YouTube within 48 hours of “Marble Soda” being uploaded.
On the heels of the success of “Marble Soda”, he went on to release 7 more original songs, amassing over 100 million YouTube views in the span of 3 years. Shawn went on to create an original visual element that blends video games, animation, and music together. With his visual brand, Shawn Wasabi has cultivated demand for his services as a studio music producer, which resulted in famed songwriter Justin Tranter signing him to an exclusive publishing deal with Facet Music/Warner Chappell.
With K-Board Pro 4 we’ve taken the format of a traditional keyboard and updated it for the 21st Century. With our SmartFabric™ Sensors underneath each key you can tweak any synthesis parameter in real time by moving your fingers while you are playing. The MIDI MPE Standard is the future for expressive controllers and we have designed the K-Board Pro 4 to be the ultimate MPE Controller.
by Keith McMillen
Multidimensional Expression
The Keith McMillen Instruments K-Board Pro 4 is a 4-octave MIDI keyboard controller with multidimensional touch sensitivity in each key. K-Board Pro 4 supports MIDI Polyphonic Expression (MPE) that allows additional gestures individually on each key. You can wiggle your finger horizontally to generate MIDI CC commands, slide vertically to open up a filter, or apply pressure to control volume. For non-MPE synths, the K-Board Pro 4 provides fully featured polyphonic aftertouch. The data from each gesture is completely assignable and sent individually per note.
Keith McMillen Instruments K-Board Pro 4 Features:
Provides a level of expressiveness previously attainable only with acoustic instruments
Support for MPE (MIDI polyphonic expression) protocol
SmartFabric sensors underneath each key
Transmits attack and release velocity and continuous pressure, as well as horizontal and vertical position data
48 resilient silicone keys and no moving parts for superior durability
USB powered; class compliant
MacOS/Windows, iOS/Android compatibility
SmartFabric sensor technology
Under each key is Keith McMillen Instruments’ patented SmartFabric sensor technology, which lets you tweak any synthesis parameter in real time simply by moving your fingers while you are playing.
The K-Board Pro 4 is USB powered and class compliant to ensure compatibility with MacOS, Windows, iOS, and Android, as well as all MIDI-enabled hardware.
Editors in OSX, Windows and Web MIDI formats
Keith McMillen Instruments provides editors for OSX and Windows, but you can also edit and update your K-Board Pro 4 directly online using Web MIDI.
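Web MIDI makes this kind of browser-based editing possible with just a few lines of code. Here is a minimal TypeScript sketch using only the standard Web MIDI API (it is not KMI’s actual editor code) that requests MIDI access in a compatible browser such as Chrome and lists the connected devices:

// Request access to MIDI devices from the browser (Chrome supports this).
// Hardware editors typically request { sysex: true }; plain note/CC traffic
// works without it.
navigator.requestMIDIAccess({ sysex: false }).then((access: MIDIAccess) => {
  // List every connected output (for example, a K-Board Pro 4 on USB).
  access.outputs.forEach((output: MIDIOutput) => {
    console.log(`Output: ${output.manufacturer} ${output.name}`);
  });
  // Log incoming messages from every input as raw status/data bytes.
  access.inputs.forEach((input: MIDIInput) => {
    input.onmidimessage = (msg: MIDIMessageEvent) => console.log(msg.data);
  });
});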
After many years, Moog releases a polyphonic analog synth
The Moog One is a programmable, tri-timbral analog synth featuring an intuitive tactile interface that allows you to explore a vast sonic universe of classic Moog analog circuits that have been known for many years for their unrivaled punch and rich harmonics.
An advanced sound architecture comes in 16-voice and 8-voice versions
The 16-voice version plays sixteen complete voices simultaneously, and the 8-voice version plays eight. Each voice features three state-of-the-art analog voltage-controlled oscillators (VCOs), two independent analog filters (a Variable State filter and the famous Moog Ladder Filter) that can be run in series or parallel, a dual-source variable analog noise generator, an analog mixer with external audio input, four LFOs, and three envelope generators.
You can split or layer three different timbres — each with its own sequencer, arpeggiator, and onboard effects library — across the premium 61-note Fatar keyboard with velocity and aftertouch.
Moog One Analog Synthesizer Features:
8- or 16-voice polyphony
3 VCOs per voice with waveshape mixing and OLED displays
Unison mode (up to 48 oscillators on the 16-voice instrument)
2 filters per voice with filter mixing (2 multimode State Variable filters that function as a single filter, and a classic lowpass/highpass Moog Ladder filter)
3 DAHDSR envelopes per voice with user-definable curves
3-part multitimbrality
Separate sequencer and arpeggiator per timbre
Chord memory
Dual-source noise generator with dedicated envelope
Mixer with external audio input
Ring modulation with selectable routing
Oscillator FM and hard sync with selectable routing
4 assignable LFOs
Premium 61-note Fatar TP-8S keybed with velocity and aftertouch
Assignable pressure-sensitive X/Y pad
Digital Effects (Synth and Master Bus)
Eventide reverbs
Selectable glide types
USB and DIN MIDI
Save, categorize, and recall tens of thousands of presets
Create Performance Sets that make up to 64 presets accessible at the push of a button
2 x ¼” stereo headphone outputs
2 pairs of assignable ¼” outputs (supports TRS and TS)
4 x ¼” hardware inserts (TRS)
1 x ¼” external audio input (line-level)
1 XLR + ¼” TRS combo external audio input with trim knob
9 assignable CV/GATE I/O (5-in/4-out)
USB drive support for system and preset backup
LAN port for future expansion
Amos Gaynes on the Moog One
Amos Gaynes works for Moog Music, and he is also the chairman of the MIDI Manufacturers Association’s Technical Standards Board. Here he talks about the development of the Moog One.
UNO Drum marries analog sounds and digital control
The UNO Drum features six true analog voices — kick, snares, claps, and hi-hats — plus there are 54 PCM samples — toms, rims, ride, and cowbell — derived from IK’s popular SampleTank 4. Because the UNO has 11-voice polyphony you can even layer the analog and PCM sounds together.
The analog section was designed by Soundmachines who also collaborated with IK Multimedia on the UNO Synth.
IK Multimedia UNO Drum Features:
Drum machine with analog engine plus 54 PCM samples
6 analog voices designed by Soundmachines
54 PCM samples derived from SampleTank 4
Layer analog and PCM sounds together with 11-voice polyphony
Loads of sound-shaping tools, including tune, snap, and decay for every sound, and global drive and compression effects
12 touch-sensitive pads with dual velocity zones
4 dynamic encoders
Stutter, random, and roll effects for spicing things up
64-step sequencer with 8 parameter automations per step
Record by step or in real-time
Save and recall 100 patterns and 100 drum kits
Song mode chains up to 64 patterns together in any order
Integrates with your rig via USB, 2.5mm MIDI I/O, and audio pass-through
Runs off battery or USB bus power
Integrates in any Live, Studio, or Mobile Set-up
The UNO Drum features USB and traditional MIDI via 2.5mm jacks (the cables are included), so it’s easy to integrate with your Mac/PC, iOS device, or traditional outboard MIDI gear.
The UNO Drum also offers an audio input with compression for daisy-chaining with other gear.
Fig. 1: The orange notes overlap the attacks of subsequent notes. The white notes are trimmed to avoid this.
Most bass lines are single notes, and because bassists lift fingers, mute strings, and pick, there’s going to be a space between notes. Go through your MIDI sequence note by note and make sure that no note extends over another note’s attack (Fig. 1). If two notes play together, you’ll hear a momentary note collision that doesn’t sound like a real bass. I’ll even increase the gap between notes slightly if the notes are far apart.
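If you want to clean up a whole part at once, the same rule is easy to apply programmatically. The TypeScript sketch below (the types and function are mine, purely illustrative) trims each note so it ends a small gap before the next note’s attack:

// Trim MIDI bass notes so none overlaps the next note's attack (see Fig. 1).
// Notes are in ticks and assumed sorted by start time; `gap` is the small
// extra space between notes mentioned above.
interface Note { start: number; end: number; pitch: number; }

function trimOverlaps(notes: Note[], gap: number): Note[] {
  return notes.map((note, i) => {
    const next = notes[i + 1];
    if (!next || note.end < next.start - gap) return note;
    // Never trim a note to zero (or negative) length.
    return { ...note, end: Math.max(note.start + 1, next.start - gap) };
  });
}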
2. Squeeze every drop out of your track
Fig. 2: Studio One’s Transform tool makes it easy to compress values by raising the tool’s lower boundary.
Great bassists are known for their touch — the ability to play notes with consistent timing and dynamics. It can sometimes be harder to play keyboard notes consistently than bass strings, which brings us to MIDI velocity compression.
Audio compression can give more consistent levels, but it doesn’t give a more consistent touch; that has to happen at the source. Some recording software programs have either MIDI FX or editing commands to compress data by raising low-level notes and/or reducing high-level notes (Fig. 2). But if your program doesn’t have velocity compression, there’s an easy solution: add a constant to all velocity values for “MIDI limiting.”
For example, suppose the bass part’s softest note velocity is 70, and the highest is 110 — a difference of 40. Add 35 to all values, and now your softest velocity is 70+35=105, and your highest is 110+35=145, but velocity can’t go higher than 127 — so you have instant “MIDI limiting.” Now your highest-velocity note is 127, and there’s only a difference of 22 between the highest and lowest notes. If you want to go back to making sure the highest-level note is 110, then subtract 17 from all values. Your highest-level note is now at 110, but the lowest-level note is 88 — still a difference of 22 instead of 40.
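In code form, the whole trick is two passes of simple arithmetic. Here is a small TypeScript sketch of that “MIDI limiting” (the function name and shape are mine, not from any particular DAW):

// "MIDI limiting": add a constant to every velocity, clamp at 127, then
// optionally subtract so the loudest note returns to its original level.
function midiLimit(velocities: number[], boost: number, restorePeak = true): number[] {
  const limited = velocities.map(v => Math.min(127, v + boost));
  if (!restorePeak) return limited;
  const offset = Math.max(...limited) - Math.max(...velocities); // e.g. 127 - 110 = 17
  return limited.map(v => v - offset);
}

// The worked example above: velocities from 70 to 110 with a boost of 35.
console.log(midiLimit([70, 90, 110], 35)); // -> [88, 108, 110], a 22-point spread instead of 40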
This doesn’t necessarily preclude adding audio compression, but you’ll probably need to add less of it, and the sound will be more natural.
These kinds of techniques work, perhaps with slight modifications, with many software programs. For example, when editing MIDI dynamics, although Studio One’s Transform tool shown above gives very intuitive visual feedback, Cubase and Digital Performer have very flexible ways to control MIDI dynamics, and Ableton Live’s Velocity MIDI effect even lets you sculpt velocity curves.
3. If it’s a Synth Bass
If it’s a synth bass, you can probably modulate synth parameters with velocity. When creating sampled bass instruments, rather than go through the hassle of multi-sampling different velocities, I sample each individual note plucked strongly and then tie sample start time, level, and filter cutoff to note velocity to create the dynamics. Although the sound may arguably not be as realistic as something with four billion round-robin samples, I find this approach to be more expressive overall because any synth parameter changes tied to dynamics are continuous.
4. Slippin’ and Slidin’
Slides are an important bass technique — not just slides up or down a string, but over a semitone or more when transitioning between notes. For example, when going from A to C, you can extend the A MIDI note and use pitch bend to slide it up to C (remember to add a pitch bend of 0 after the note ends). Also, all my sampled bass instruments have sampled down and up/down slides for each string. Throw those in from time to time, and people swear it’s a real bass. Unless you’re emulating a fretless bass, you want a stepped, not continuous, slide to emulate sliding over frets, but you don’t want to re-trigger the note at each step. There are several ways to do this.
Fig. 3: Studio One’s Presence XT instrument has glide. Enable it, set a very short glide time, and add a very slight overlap between notes — the 1-measure slide shown here goes from C to G. The last note does not overlap with the G; this gap between notes allows the G note to re-trigger.
If the bass instrument has a legato mode, you can do a slide by adding notes at individual semitones to create the slide, and then using legato mode to avoid having the notes re-trigger. Legato mode does require an overlap between notes, but it can be very short.
Glide will also work under the same conditions, but you need to set the Glide time to minimum (Fig. 3). If your program doesn’t interpolate between pitch-bend messages (or you can turn off smoothing for the pitch-bend function), quantizing pitch-bend slide messages so they’re stepped is another solution, and this one doesn’t require entering extra notes. For example, with a virtual instrument’s pitch bend set to +/-12 semitones, quantizing the bend to 1/32 triplets will give exactly 12 steps in an octave-up slide that lasts one beat, while a 1/16-note triplet gives 12 steps in an octave-up slide that lasts two beats. Or just draw a stepped pitch bend.
Then again, you might want to emulate a fretless bass and have continuous slides.
Fig. 4: Use these pitch-bend values to slide a precise number of semitones.
For precise slides, Figure 4 shows the amount of pitch-bend change per semitone when using a pitch-bend range of +/-12 semitones (recommended for bass to make these kinds of slides possible). For example, if an octave is a pitch-bend value of 8191 and you want to start a slide three semitones above the note where you want to land, start at a pitch-bend value of +2048 and end with a pitch-bend value of 0. If you want to step the part (this assumes you can turn off pitch-bend smoothing or enter precise values in an Event List), add equally spaced events at +1366, +683, and just before the final note, 0.
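If you’d rather compute these values than read them from a chart, the math is simple. A TypeScript sketch, assuming the same +/-12 semitone bend range where full positive bend is 8191:

// Stepped pitch-bend offsets for a fretted slide with a +/-12 semitone range.
// `slideFrom` is how many semitones above the target note the slide starts;
// returns one value per fret step, ending at 0 on the target pitch.
function slideSteps(slideFrom: number): number[] {
  const perSemitone = 8191 / 12; // about 683 per semitone
  const steps: number[] = [];
  for (let s = slideFrom; s >= 0; s--) {
    steps.push(Math.round(s * perSemitone));
  }
  return steps;
}

console.log(slideSteps(3)); // -> [2048, 1365, 683, 0], within a point of the figure's values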
5. Mod Wheels Are Not for Vibrato
Dubstep people have figured this out — they eschew vibrato for tremolo or “filtrato.” With bass, I use the mod wheel for what I feel are more useful effects:
Roll off treble as the wheel rolls further away to emulate a traditional bass tone control
Mix in a sub-octave for an octave-divided bass sound
Alter tremolo depth to add pulsed tremolo sparingly
Increase drive to an amp sim to give more “growl”
Because you’ll likely be playing single notes for a bass line, your other hand will be free to work the mod wheel and increase expressiveness even further — and that’s a good thing.
Yamaha has a number of mobile apps for their DTX electronic drums to make drumming more fun while helping you to get better!
DTXM12 Touch
The DTXM12 Touch app not only lets you edit the pads with a touchscreen interface but also adds new features that expand its functionality in live performance situations. When the DTX-MULTI 12 is connected to an iPad or iPhone via USB, drummers can now trigger song playback and backing tracks from their music library using the pads, and then mix the audio through the stereo auxiliary input! Additionally, the app includes a mixer for all the sounds of a kit, including up to four sounds per pad, and access to every parameter of the instrument. It also lets users quickly see what voices are assigned to the pads on the touchscreen.
DTX502 Touch
The DTX502 Touch app lets drummers take control of the DTX502 drum trigger module using their iOS device’s touchscreen interface when connected via USB. Now it’s even easier to create custom user kits, layer and cross-fade two different sounds per pad, and program up to 30 click and tempo settings for instant recall. The app also serves as a conduit for downloading new kits in a wide range of styles from YamahaDTX.com. In addition, the app has a unique Hybrid Setup wizard that helps drummers calibrate custom trigger settings quickly for their DTX502-series kit, or any combination of electronic pads and acoustic drum triggers!
DTX402
With the DTX402 Touch app, the creative possibilities are nearly limitless. Fine-tune your DTX402 series kit to precision. Change the sounds for any of the 10 built-in kits or individual pads, set custom tunings, volume settings, and more. Access the trigger setup, reverb, and pedal settings with a single touch, and adjust the virtual position of the open hi-hat. You can even set the volume for the onboard “Voice Guidance” training system. The 402 Touch app also has 10 built-in play-along songs, designed to make you a more well-rounded, diversified drummer. Choose to play along with the pre-recorded drums in those songs as a practice reference, or mute the pre-recorded drums and take on the show for yourself. The app has a big focus on education, and offers 10 challenge-mode practice exercises covering a variety of important skills and topics every drummer should strive for.
Song Beats
Song Beats is an iPhone app that supports your drum performance by visualizing which drums to hit and when to hit them while playing along with your favorite songs. The app also allows you to easily create custom accompaniments for drums, putting your drumming at the center of the band. In addition, you can also use 10 built-in demo songs or any MIDI song that you’ve already purchased from Yamaha MusicSoft by using iTunes File Sharing. Register Song Beats with Yamaha, and your first song is free!
DTX700 Touch
The DTX700 Touch app allows you to easily and intuitively customize your kit with quick access to editing and layering. Fine-tune your sounds with the EQ and add filters with a simple touch and drag. Download free drum kits from YamahaDTX.com, or back up a kit or the whole module with an iOS device.
NI has released their smallest, most portable controller ever!
Native Instruments Komplete Kontrol M32 Features:
Micro-size keyboard controller with 32 keys for all your virtual instruments and effects
Affordable entry point into the NI world
Synth-action, custom NI micro-keybed
Informative OLED display for at-a-glance navigation
8 touch-sensitive control knobs
2 touch strips for intuitive expression
4-directional push encoder for one-handed sound browsing and project navigation
Tag-based preset browsing via the Komplete Kontrol software lets you find sounds quickly and hear instant previews
Smart Play lets you stay in key with over 100 scales and modes, play chord progressions and arpeggios with single keys, or map any scale to white keys only
Pre-mapped control of Komplete instruments and effects, plus hundreds of Native Kontrol Standard (NKS) plug-ins from leading manufacturers via Komplete Kontrol software
Expand your library with loops and samples from Sounds.com
Full VSTi and VST FX support
Deep integration with Maschine software
Intuitive control over Logic Pro X, GarageBand, and Ableton Live
TRS pedal input, assignable to sustain
USB 2.0 bus powered
Can be used as a generic MIDI controller
Software bundle included
Comes with all the software you need to get started making music. Included software:
As one of the inventors of the Musical Instrument Digital Interface, Roland has continued to push the boundaries of the now 36-year-old protocol(!) by continuously developing MIDI-based applications that bring totally new creative opportunities to musicians. One such application is the Roland AE-05 Aerophone GO, a unique digital wind instrument which uses MIDI (and audio) over Bluetooth to dramatically expand the playing experience.
Connecting to a compatible iOS or Android mobile device using Bluetooth allows the Aerophone GO to interact with a range of apps including Roland’s own Aerophone GO Plus and Aerophone GO Ensemble.
With Aerophone GO Plus, a player gains 50 new sounds triggered by MIDI over Bluetooth and can jam along to their favorite songs from their smartphone. In addition to an integrated metronome, the app also allows for customizing the connected Aerophone to suit the player’s technique, with all changes being communicated by MIDI over Bluetooth.
A second app, Aerophone GO Ensemble, connects up to 7 players with a single mobile device for group performance using a common bank of sounds, all facilitated by MIDI over Bluetooth. Whether the application is a lesson with a teacher, a duo performance, or a complete ensemble, MIDI over Bluetooth supports a unique wireless playing experience that would have been difficult to imagine 30+ years ago!
Not only the volume but also the sound itself is dynamically affected by the force with which you blow into the mouthpiece and the strength with which you bite it, providing a natural and richly expressive sound.
by Roland
The Aerophone has tons of internal sounds and built-in speakers, but it is also a great MIDI controller. Here are some of the parameters you can control on the Aerophone AE-10: the bite sensor can control pitch and vibrato, and the strength of your breath affects not only volume but other aspects of the sound as well.
Recently Ableton announced a free update to Live – Version 10.1
There were a number of workflow improvements, but one of the major new features is that the Wavetable synthesizer now supports user wavetables. This allows you to import any wavetable or sample and use it as an oscillator.
Check out this YouTube video of everything that’s new in Live 10.1.
Wavetable synth architecture
Wavetable has dual oscillators plus a sub-oscillator, which feed into two filters. Five different types of resonant multimode filter are available for each of the two filters: Clean, OSR (based on the Oscar), MS2 (a model of the Korg MS-20), PRD (based on the Moog Prodigy), and SMP (a variation of the Sallen-Key topology). The MS2, PRD, SMP, and OSR modes are switchable between lowpass and highpass, with variable Drive for adding grit.
There are tons of preset wavetables, already organized into categories: Basics, Collection, Complex, Distortion, Filter, Formant, Harmonics, Instruments, Noise, Retro, and Vintage. You can pretty much guess what is in the presets from the category names.
Wavetable synthesis was used in Ensoniq, Korg, PPG and many other synthesizers. It can also do FM-like synthesis.
Wavetable synthesis is fundamentally based on periodic reproduction of an arbitrary, single-cycle waveform. In wavetable synthesis, some method is employed to vary or modulate the selected waveform in the wavetable. The position in the wavetable selects the single-cycle waveform. Digital interpolation between adjacent waveforms allows for dynamic and smooth changes of the timbre of the tone produced. Sweeping the wavetable in either direction can be controlled in a number of ways, for example, by use of an LFO, envelope, pressure, or velocity.
by Wikipedia
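That description translates almost directly into code. A toy TypeScript sketch (the names and structure are illustrative, not Live’s internals): the wavetable position selects a single-cycle frame, and linear interpolation between adjacent frames smooths the timbral change:

// Read one sample from a wavetable. `table` holds single-cycle frames of
// equal length; `position` (0 to frames - 1, fractional) selects the frame;
// `phase` (0 to 1) is the position within the cycle.
function wavetableSample(table: number[][], position: number, phase: number): number {
  const frameA = Math.floor(position);
  const frameB = Math.min(frameA + 1, table.length - 1);
  const mix = position - frameA; // fractional blend between adjacent frames
  const i = Math.floor(phase * table[frameA].length) % table[frameA].length;
  return table[frameA][i] * (1 - mix) + table[frameB][i] * mix;
}

Sweep `position` from an LFO or envelope, exactly as the quote suggests, and the timbre morphs smoothly from one frame to the next.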
FM: This mode applies an FM modulator to the wavetable, with visual feedback so you can see the results. In this mode, the two adjustable parameters are tuning and amount.
You can achieve familiar FM effects by starting with the Sines 1 table in the Harmonics category (with a wave position of zero; pure sine), then adjusting the modulation amount parameter with an envelope. The tuning hot spots, where the FM effect retains harmonic coherence (without dissonant artifacts), are -100%, -50%, 0, 50%, and 100%. These correlate with ratios of 0.25:1, 0.5:1, 1:1, 2:1 and 4:1, respectively. Between those values, the Sines 1 sine wave is a fantastic resource for organic bell and mallet textures. Because FM is more controllable with simple carrier waveforms, complex wavetables will yield results that are more unpredictable.
by Ableton’s Lead preset designer and soundteam member Huston Singletary
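Those hot spots follow a simple pattern: every 50% of tuning doubles the ratio. A one-line sketch of that mapping (an inference from the listed pairs, not Ableton’s documented formula):

// Map Wavetable's FM tuning percentage to a frequency ratio.
// -100% -> 0.25:1, -50% -> 0.5:1, 0 -> 1:1, 50% -> 2:1, 100% -> 4:1.
function fmRatio(tuningPercent: number): number {
  return Math.pow(2, tuningPercent / 50);
}

[-100, -50, 0, 50, 100].forEach(t => console.log(`${t}% -> ${fmRatio(t)}:1`));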
Wavetable’s envelopes give you temporal control over the shape of the sound. Envelope 2 shows a very typical acoustic shape that might be used for a piano; Envelope 3 is a very short percussive shape.
One of my favorite techniques is to apply velocity to envelope 2 or 3’s peak parameter, which serves to tie that envelope’s modulation amount to the impact of hitting a key or Push pad.
by Ableton’s Lead preset designer and soundteam member Huston Singletary
Of course, wavetables really come alive when you move through the single-cycle waveforms, which creates timbral changes. The Prophet VS and PPG were some of the vintage synths that really showed these capabilities off.
One of my favorite techniques for adding vintage animation to our wavetables is to modulate the PW parameter gradually for only one oscillator with a very slow triangle or sine LFO playing against a second oscillator, with Osc 2’s PW base value set to none or its FM amount slightly raised.
by Ableton’s Lead preset designer and soundteam member Huston Singletary
Ableton of course added other features to Live 10.1, including a Channel EQ.
There is a new Delay effect that combines the Simple Delay and Ping Pong Delay devices, with controls for Jump, Fade-In, and Pitch.
New automation features
Musicians get a palette of automation shapes to choose from, as well as the ability to stretch and skew automation, enter values with the numerical keypad, and easier access to clip modulation in Session View. Live now also detects curved movements inside automation and can merge multiple breakpoints into C- and S-shapes.
New in Live: Explore a broader palette of sounds with a new synth, Wavetable. Shape your music with three new effects, Echo, Drum Buss and Pedal. Edit multiple MIDI clips from a single view and never lose a great idea again, with Capture MIDI.
Jordan Rudess of Dream Theater is bringing his KeyFest experience back to Sweetwater! With three days of jamming alongside, hanging out with, and learning from Rudess and guests David Rosenthal (Rainbow, Billy Joel, Cyndi Lauper) and Otmaro Ruíz (solo artist, John McLaughlin, Abraham Laboriel), KeyFest is an event no keys player should miss.
Call (260) 432-8176 x1993 to register.
MEET THE ARTISTS
JORDAN RUDESS
Jordan Rudess, best known as the keyboardist / multi-instrumentalist for platinum-selling, Grammy-nominated prog rock band Dream Theater, began his training at the world-renowned Juilliard School of Music at the age of nine. Since then, he has gone on to a distinguished and diverse career, gaining fans and recognition the world over, not to mention being voted Best Keyboardist of All Time (Music Radar magazine).
In addition to playing in Dream Theater, Jordan has also worked with a wide range of artists, including David Bowie, Enrique Iglesias, Liquid Tension Experiment, Steven Wilson, and the Dixie Dregs, among others. And Jordan’s interest in state-of-the-art keyboard controllers and music apps has also led to a successful career with his app development company, Wizdom Music. For more: jordanrudess.com and wizdommusic.com
DAVID ROSENTHAL
Few musicians have achieved the broad-based success that David Rosenthal has earned as a musical director, keyboardist, synthesizer programmer, producer, orchestrator, and touring professional. Since graduating from Boston’s prestigious Berklee College of Music, David’s talents have been continually in demand with many of the most prominent artists in the world, including his long tenure as Keyboardist and Musical Director for Billy Joel, plus work with Bruce Springsteen, Elton John, Ritchie Blackmore and Rainbow, and Cyndi Lauper.
Besides recording and touring, David also continues to show a strong commitment to educating young musicians at such prestigious music schools as Berklee College of Music, Musicians Institute, and Full Sail University. Accordingly, Berklee has honored David with its Distinguished Alumni Award for Outstanding Achievements in Contemporary Music, and he was voted Best Hired Gun in Keyboard magazine’s readers’ poll.
OTMARO RUIZ
Known for his versatility and virtuosity, Otmaro Ruíz is considered one of the most important jazz pianists in the scene today. With an intense musical career filled with concerts, workshops, and recordings worldwide, Otmaro has earned multiple Grammy nominations and awards, a Lifetime Special Award for International Exposure from the Venezuelan National Artists Institute (for outstanding career in a foreign country), and even an Honorary Doctorate Degree in Musical Arts from Shepherd University.
The long list of renowned musicians with whom Otmaro works constantly confirms his versatility. Among these amazing artists are John McLaughlin, John Patitucci, Jing Chi, Frank Gambale, Peter Erskine, Dave Weckl, Robben Ford, and Vinnie Colaiuta, making it easy to see why he is regarded as one of the most sought-after keyboardists in the world today.
Yamaha originally launched the Soundmondo website and mobile app in 2015 for the reface line of keyboards. It was one of the first major websites to utilize Web MIDI.
Connect your reface keyboard to your computer, iPad, or phone, launch Chrome as your browser, and you can browse sounds shared by other reface owners. You can also create and share your own sounds with people around the world.
There are over 20,000 free reface sounds available online.
“Soundmondo is to sound what photo-sharing networks are to images. It’s a great way to share your sound experiences and get inspiration from others.”
by Nate Tschetter, marketing manager, Synthesizers, Yamaha Corporation of America.
Yamaha has since expanded Soundmondo to include other Yamaha keyboards, including the MONTAGE, MODX, and CP88/73 stage pianos.
So exactly how does social sound sharing work? Well, it’s actually pretty simple. You select your instrument and then browse by tags: for example, all the sounds that have the tags 2000s, EDM, and Piano.
Select a sound and it is sent from the Soundmondo server to your browser, and from your browser to your keyboard, where you can play it. If the synth or stage piano can store sounds, you can store the sound locally on your keyboard. Using the Soundmondo iOS app, you can create set lists and organize your sounds for live performance.
When Yamaha launched Soundmondo compatibility for Montage they produced 400 MONTAGE Performances, including content from the original DX ROM Cartridges, special content from Yamaha Music Europe and 16 original Performances from legendary synthesizer sound designer Richard Devine.
You can see Richard’s performance using the Montage and Richard’s modular setup at Super Booth 2018.
We’re in a golden age of sampled instruments; these days, you can find realistic-sounding samples of everything, including drums. Back in the day, programmed drums sounded artificial and mechanical. Today, drums only have to sound that way if you want them to — and that sound is perfect for certain tracks! But assuming you want realistic-sounding sampled drums for your productions, here are six tips on how to program your drums to sound more lifelike.
1. Sonic Variation
When a drummer attacks the skins, each hit sounds a bit different. He or she hits the drumhead in a slightly different location each time, the sticks hit at different angles, the velocity and power are a bit different, and there are differences between right- and left-hand strokes — even when playing just one drum. All of these things make a difference in the tone that is produced by the drum and contribute to the instrument sounding “live.” To emulate this, make sure that each drum is represented by more than one sample — and while this is critical for preventing “machine-gun drum rolls,” it’s important every time a virtual drum is “hit.” These days, many dedicated drum software instruments will handle mixing up samples automatically, but it can also be done by varying which sample is played based on the velocity of the hit. Many samplers and virtual instruments allow you to set up multiple samples in a round-robin, meaning that the sampler will choose a sample at random for each hit. If your instrument doesn’t support this, you can use an LFO, or modulation tied to velocity, to drive a filter, an EQ, a pitch shifter, or another processor and subtly alter the pitch, tone, or shape of each triggered drum, adding variation.
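If you’re scripting this yourself, the key detail is avoiding the same sample twice in a row. A small illustrative TypeScript sketch:

// Pick a random sample layer for each hit, but never repeat the previous
// pick, which avoids the "machine-gun" effect described above.
function makeSamplePicker(sampleCount: number): () => number {
  let last = -1;
  return () => {
    let next: number;
    do {
      next = Math.floor(Math.random() * sampleCount);
    } while (next === last && sampleCount > 1);
    last = next;
    return next;
  };
}

const pickSnare = makeSamplePicker(4); // 4 snare samples
console.log(pickSnare(), pickSnare(), pickSnare()); // e.g. 2, 0, 3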
2. Groovical Variations
Nothing makes programmed drums sound mechanical more than having every hit land exactly on a quantized grid. It’s an instant recipe for rigid, robotic, metronomic drums with no “groove.” Even the best human drummer playing along with a click track has slight variations in timing, coming in slightly ahead of or behind the beat, etc. — and they’ll often do this intentionally to either drive a part forward or to lay it back. A drummer may even push certain drums forward and pull others back at the same time to create a certain groove. If your drum software has a “humanize” function, that may add just the right amount of slight variation that won’t make any hit sound out of time, but will make it just off the grid enough to sound more alive. If there isn’t a humanize function, you can duplicate the effect manually by pulling individual drums or hits a few clicks ahead or behind the beat. Some DAWs and drum software also offer “groove” functions that allow you to apply a particular “feel” to your MIDI tracks. To make this easy, you might want to break the MIDI tracks that drive the drums out to individual tracks (a separate MIDI track for the kick, one for the snare, one for hi-hat, and so on), so you can adjust them independently.
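A manual humanize pass boils down to adding a small random offset to each hit. A TypeScript sketch with hypothetical types:

// Nudge each hit's start time by up to `maxOffset` ticks in either direction
// so nothing lands exactly on the grid. Keep maxOffset small (a few ticks at
// typical PPQ resolutions) so the part stays audibly in time.
interface DrumHit { tick: number; note: number; velocity: number; }

function humanize(hits: DrumHit[], maxOffset: number): DrumHit[] {
  return hits.map(hit => ({
    ...hit,
    tick: Math.max(0, hit.tick + Math.round((Math.random() * 2 - 1) * maxOffset)),
  }));
}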
3. The Rare and Unique Three-armed Drummer
Most drummers have two arms and two feet. That means that at any given point in time, they’re only going to be able to play two hand-struck and two foot-struck drums or cymbals. When you’re going for realism, remember that a drummer can’t be playing a two-handed hi-hat pattern at the same time they’re doing a two-handed tom fill. Or playing a double-kick pattern and a pedaled hi-hat pattern together. They can’t strike two toms and a cymbal simultaneously. Having too many instruments attack at the same time is a dead giveaway that a part is programmed and not “real.” Study the patterns and rhythms of real drummers to see how they’re making the most of their four limbs, and make sure you don’t “improve” on a human drummer by programming an extra arm or foot!
4. Moving in Stereo
When you hear drums live or record a live drummer, there is a natural stereo field created by the drum set’s physical positioning. Imagine standing dead center in front of the kit; some of the drums will be to the left of the kick drum, others to the right of the kick drum. If you place each drum in the stereo field the way that a real drum kit is set up, it will add a realistic sense of space to the kit. There are two “perspectives” you can use for this: the drummer’s perspective looking at the kit (for a right-handed drummer, the hi-hat will be to the left, the floor toms to the right) and the audience perspective looking at the kit (a right-handed drummer will have the hi-hat on the right and the floor toms on the left). Either perspective is correct and fine; choose the one you prefer or that works best for your song. Also, if you have stereo overheads on the kit, make sure that the panning within those overheads is matched by the panning of the individual drums in the stereo field (if the hi-hat is halfway to the left in the overheads, the hi-hat track should also be panned halfway to the left), otherwise the instruments will not localize correctly in the speakers and may sound “smeared.”
5. Make Room for the Drums
Real, physical drums have weight and take up space in the room. When you hit them, the sound bounces around the room, creating a natural ambience. That ambience will certainly be picked up if there are “room” mics, but the ambience is also audible in the overhead mics and even in the close mics on the drums. You may not immediately notice it, but if it’s gone, you can tell the difference. Some drum samples include the ambience of the room they were recorded in or allow you to add it into the final mix. For those that were recorded dry, add a very slight amount of a room-type reverb to the drums, not enough to be heard as an effect, but enough to give the drum sounds a sense of space. Note that this is not the same thing as reverb processing you add for effect. You may, for example, include a room reverb for subtle ambience, and still use a gated reverb or a big plate reverb to create a special effect.
Established in 2009, ROLI is creating the future of musical instruments. From next-generation keyboards like the Seaboard to the modular music-making devices of BLOCKS, ROLI instruments are deeply expressive and intuitive to play. They are so versatile that they can sound like anything and be played anywhere.
Technologically advanced touch interfaces make every movement musical on the Seaboard GRAND, Seaboard RISE, Seaboard Block, Lightpad Block, NOISE app, and ROLI PLAY app — part of a growing family of ROLI products that are extending the joy of making music to everyone.
ROLI Song Maker Kit
The ROLI Songmaker Kit is an incredibly high-powered yet flexible music creation kit — and the newest product from ROLI. Combining the expressive power of the Seaboard Block, Lightpad Block, and Loop Block, it gives you everything you need to make a track anywhere.
It’s more than the sum of its parts. Play the Blocks together as an integrated controller, or play each Block by itself. Connect the kit to your favorite software, and map effects and functions to the incredibly responsive surfaces of the Lightpad and Seaboard Blocks. The huge software package includes Equator, Tracktion Waveform, and Ableton Live Lite (Ableton is also a May MIDI Month platinum sponsor).
Roli and Ableton Live Lite
Ableton Live, the high-powered digital audio workstation (DAW) and sequencer, is a staple in music production. Combining tools for composing, recording, beat-matching and crossfading, Ableton Live’s versatility has made it a favorite of both producers and performers. Now all Lightpad Blocks — including the new Lightpad Block M — seamlessly integrate with Ableton Live. And all Lightpad owners get Ableton Live 9 Lite for free! So you can enjoy the dynamism of Ableton Live and control the DAW in a totally new way.
Brothers Marco and Jack Parisi recreate a Michael Jackson classic hit
Electronic duo PARISI are true virtuosic players of ROLI instruments, whose performances have amazed and astounded audiences all over the world — and their latest rendition of Michael Jackson’s iconic pop hit “Billie Jean” is no exception.
Roli and MPE
ROLI has been an important contributor to MIDI and helped to make MIDI Polyphonic Expression (MPE) a new part of the MIDI standard. Check out this article, in which MIDI Association advisory board member and MIDI Month Tip contributor Craig Anderton explains MPE, and follow the links to the MPE coverage on MIDI.org.
MIDI Polyphonic Expression (MPE) is a technological breakthrough for today’s musicians, and one of the unique aspects of this emerging category is that it works interdependently across hardware and software. Built on the original MIDI specification, MPE-compatible software programs provide new ways to define notes and performance gestures. MPE-compatible hardware controllers offer innovative interfaces that let musicians engage with all of the extra expressiveness facilitated by the software.
One of the biggest recent developments in MIDI is MIDI Polyphonic Expression (MPE). MPE is a method of using MIDI which enables multidimensional controllers to control multiple parameters of every note within MPE-compatible software…
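The mechanism behind MPE is easy to see in raw MIDI terms: each sounding note is rotated onto its own channel, so per-note pitch bend and pressure messages affect only that note. A simplified TypeScript sketch using a Web MIDI output as in the earlier example (real MPE zone management is more involved than this):

// Send each new note on its own member channel (MIDI channels 2-16 in the
// common lower-zone layout) so its pitch bend is independent of other notes.
let counter = 0;

function mpeNoteOn(output: MIDIOutput, note: number, velocity: number): number {
  const channel = 1 + (counter++ % 15); // zero-based channels 1-15 = MIDI channels 2-16
  output.send([0x90 | channel, note, velocity]); // note on
  return channel; // keep it, to address this note's bend and pressure later
}

function mpeBend(output: MIDIOutput, channel: number, bend: number): void {
  const value = bend + 8192; // center the signed 14-bit value
  output.send([0xe0 | channel, value & 0x7f, (value >> 7) & 0x7f]);
}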
Celemony Melodyne has one foot in audio and the other in MIDI, because the analysis it runs on audio is easily converted to MIDI data. If you can sing with consistent tone and level, Melodyne can convert your singing into MIDI. The same functionality for monophonic tracks exists in many DAWs.
MIDI data has been extracted from the guitar track at the top, and is now being edited in a piano roll view editor.
This has other uses, too. For example if you’re a guitar player and want a cool synth bass part, you can record the bass part on your guitar, extract the MIDI notes using Melodyne’s analysis (how you do this varies among programs, but it may be as simple as dragging an audio track into a MIDI track), transpose the notes down an octave, and drive a synth set to a cool bass sound. You may need to do a little editing, but that’s no big deal.
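The transposition step itself is trivial once the notes are MIDI. A quick sketch:

// Shift extracted MIDI notes down an octave for the synth-bass trick above,
// clamping to MIDI's valid 0-127 note range.
function transpose(notes: number[], semitones: number): number[] {
  return notes.map(n => Math.min(127, Math.max(0, n + semitones)));
}

console.log(transpose([57, 60, 64], -12)); // A3, C4, E4 -> A2, C3, E3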
Here are some videos on how to do the same thing in our Platinum sponsor’s DAW, Ableton Live.
Audio to MIDI in Ableton
Here is a link to a more detailed article on how to convert audio to MIDI in three different DAWs: Ableton Live, Cubase, and Sonar.