
Synthogy Releases Ivory 3 American Concert D

Synthogy has released Ivory 3 American Concert D, the second virtual piano for its Ivory 3 platform. Ivory 3 American Concert D features the company's legendary recordings of a vintage New York Steinway D concert grand, now fully realized through the power of the Ivory 3 engine.

This 1951 New York Steinway D (CD 121) was handpicked by Steinway & Sons' Concert & Artist department for artist promotion. The instrument has received praise over the years from some of the world's greatest concert artists, and the signatures of masters like Glenn Gould and Rudolf Serkin grace its plate.

Powered by Ivory 3's RGB engine, the Continuous Velocity feature provides smooth, seamless velocity-to-timbre transitions on every piano strike. This technology, unique to Synthogy, behaves like modeling yet maintains the complex, rich, and realistic sound of real-world instruments. It is the foundation of a new generation of expressive capabilities in Ivory 3.

The Ivory 3 American Concert D instrument also features four stereo microphone positions. Multiple close and ambient choices are instantly available for detailed presence and depth of field. The on-board Ivory 3 mixer enables fine control over the sound.

In addition, the Ivory 3 engine is MIDI 2.0 ready, supporting MIDI 2.0 high-resolution velocity. The RGB engine renders 65,536 velocity-to-timbre levels, opening the door to new possibilities of musical expression with myriad degrees of tone color.
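To put that resolution in perspective, here is a minimal Python sketch, purely illustrative and not Synthogy's code, that widens a 7-bit MIDI 1.0 velocity into the 16-bit range carried by MIDI 2.0 note messages. The bit-repetition mapping is one common upscaling approach; the MIDI 2.0 specification defines its own translation rules.

```python
def upscale_velocity(v7: int) -> int:
    """Map a 7-bit MIDI 1.0 velocity (0-127) onto 16 bits (0-65535)."""
    if not 0 <= v7 <= 127:
        raise ValueError("MIDI 1.0 velocity must be 0-127")
    # Shift into the top 7 bits, then repeat the bit pattern to fill the
    # remaining 9 bits, so 0 maps to 0 and 127 maps to 65535.
    return (v7 << 9) | (v7 << 2) | (v7 >> 5)

assert upscale_velocity(0) == 0
assert upscale_velocity(127) == 65535  # 65,536 distinct levels in total
```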

Ivory 3 American Concert D is available now for Windows and Mac as a digital download, purchased directly from Synthogy's online store or from any of its authorized resellers.

Upgrades are available for Ivory II American Concert D owners.

Ivory 3 American Concert D MSRP $249 USD
Ivory 3 American Concert D Upgrade (from Ivory II American Concert D) MSRP $139 USD

For more info visit:

www.synthogy.com

Watch the premiere video of Ivory 3 American Concert D with Geoffrey Gee, live in studio:

Microsoft’s Major Moves To Make Making Music on Windows Easier

Pete Brown is both the MIDI Association Executive Board Chair and a Principal Software Engineer on the Windows Developer Platform team at Microsoft. He focuses on client-side development on Windows; apps and technology for musicians, music app developers, and music hardware developers; and the Windows developer community.

For musicians, there are two key enabling technologies for making music: MIDI and audio. At the Qualcomm Snapdragon Summit 2024, Microsoft announced that updates to both are coming soon to Windows.

What did we announce today for musicians and other audio professionals?

Musician Software coming to Arm64:
  • Steinberg Cubase and Nuendo in preview this week
  • Cockos Reaper in preview today
  • Reason Studios Reason in preview in early 2025

Audio Hardware coming to Arm64:
  • Vendor-specific USB Audio / ASIO driver preview from Focusrite in early 2025
  • Vendor-specific USB Audio / ASIO driver preview from Steinberg/Yamaha in 2025

In-Box Support coming to Arm64:
  • ASIO and low-latency USB Audio Class 2 driver previews in mid 2025, in-box in Windows when complete
  • MIDI 2.0 (Windows MIDI Services) previews in Windows Insider builds this November, in-box in retail Windows early next year

From my own use and from working with others in the music industry, I know we need to have support for two major features on Windows for musicians to have a great experience:

Better APIs that support MIDI, including MIDI 2.0, with backwards compatibility with MIDI 1.0 APIs and devices. Our older MIDI stack hasn't kept up with current needs and needed replacing so that we can grow and innovate.

Full support for low-latency, high-channel-count audio, using standards already accepted by the industry

Pete Brown, Microsoft

Windows MIDI Services: The New Windows MIDI Stack

Diagram: Windows MIDI 2.0 detailed architecture
Windows MIDI Services supports MIDI 1.0 as well as the MIDI 2.0 Universal MIDI Packet (UMP) standard. Together, these provide compatibility with existing MIDI devices as well as the new MIDI 2.0 devices already in market or coming soon (I have several MIDI 2.0 devices here in my studio).
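For developers curious about what UMP looks like on the wire, here is a minimal Python sketch that packs a MIDI 2.0 Note On into the two 32-bit words of a Universal MIDI Packet, following the published UMP field layout. How the words are actually sent (Windows MIDI Services, CoreMIDI, ALSA) is platform-specific and outside this sketch.

```python
def ump_note_on(group: int, channel: int, note: int, velocity16: int) -> tuple[int, int]:
    """Return the two 32-bit words of a MIDI 2.0 Note On (UMP message type 0x4)."""
    word0 = ((0x4 << 28)                  # message type: MIDI 2.0 channel voice
             | ((group & 0xF) << 24)      # UMP group
             | (0x9 << 20)                # Note On opcode
             | ((channel & 0xF) << 16)
             | ((note & 0x7F) << 8))      # low byte: attribute type 0 (none)
    word1 = (velocity16 & 0xFFFF) << 16   # low 16 bits: attribute data (unused)
    return word0, word1

w0, w1 = ump_note_on(group=0, channel=0, note=60, velocity16=0xFFFF)
print(f"{w0:08X} {w1:08X}")  # 40903C00 FFFF0000
```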

MIDI-CI, which bridges the gap between MIDI 1.0 and MIDI 2.0 UMP, is supported through normal SysEx support, and we recommend using open-source, cross-platform libraries that help with creating and parsing those messages.

Pete Brown, Microsoft

Backwards compatibility with the WinMM MIDI 1.0 API

AMEI to Fund Open Source MIDI 2.0 Driver for Windows

The Association of Musical Electronics Industries (AMEI), the organization that oversees the MIDI specification in Japan, committed to funding the development of an open-source USB MIDI 2.0 Host Driver for Windows Operating Systems under a memorandum of understanding between AMEI, AmeNote Inc, and Microsoft.

AMEI is underwriting the cost and has engaged AmeNote Inc. to develop the driver because of AmeNote's extensive experience in MIDI 2.0 and USB development. Concurrently, Microsoft has agreed to start development of a standard open-source MIDI 2.0 API for Windows.

The driver and API will be developed in accordance with Microsoft’s quality control standards, and will be managed as a permissively licensed (MIT license) Microsoft open-source project. As a result, anyone can participate in the development as an open-source contributor in the future, or use the code in their own devices or operating systems. Because of this open source arrangement, continuous and timely improvements and enhancements to the USB MIDI 2.0 Host driver and MIDI 2.0 API are expected.

Here is a list of the AMEI companies supporting this work.

  • AlphaTheta Corporation
  • INTERNET Co., Ltd.
  • Kawai Musical Instruments Manufacturing Co., Ltd.
  • CRYPTON FUTURE MEDIA, INC.
  • CRIMSON TECHNOLOGY, Inc.
  • KORG INC.
  • Educational Corporation Shobi Gakuen
  • SyncPower Corporation
  • ZOOM CORPORATION
  • SUZUKI MUSICAL INST. MFG. CO., LTD.
  • TEAC CORPORATION
  • Yamaha Corporation
  • Yamaha Music Entertainment Holdings, Inc.
  • Roland Corporation
  • Analog Devices, K.K.

For the full blog from Pete on the New Windows MIDI Stack, please click on the link below.

https://devblogs.microsoft.com/windows-music-dev/windows-midi-services-oct-2024-update/

Make Great Music with Windows on Arm

We’ve recently kicked off a project with Qualcomm and Yamaha to create a brand new USB Audio Class 2 Driver in Windows, with both WaveRT (our internal audio) and ASIO interfaces, following the latest standards for Windows driver development using the ACX framework.

The new driver will support the devices that our current USB Audio Class 2 driver supports, but will add support for high-I/O-count interfaces, with a low-latency option for musician scenarios.

It will have an ASIO interface so all the existing DAWs on Windows can use it, and it will support the interface being used by Windows and the DAW application at the same time, like a few ASIO drivers do today. And, of course, it will handle power management events on the new CPUs.

This driver will work with USB Audio Class 2 devices, so you can plug in your device, and get right to making music.

Finally, we’ll make the class driver source available to others on GitHub, just like we have with MIDI, so that any company creating their own USB Audio Class 2 drivers will be able to learn from how we handled events and also give us suggestions for how we could do better. It’s a two-way conversation.

Pete Brown, Microsoft

Announcing: Hardware-optimized USB Audio drivers on Arm64

Our new in-box driver needs to work well for all compliant USB Audio Class 2 devices. But some hardware developers are expert driver authors, and for years have known that if they write their own optimized drivers for their USB Audio Interfaces, even on other platforms with built-in drivers and low-latency APIs, they can achieve even better round-trip latency at the same levels of stability. Every millisecond counts!

Pete Brown, Microsoft

Focusrite – Native on Arm64

“Focusrite is targeting releasing native Arm64 drivers for all of its supported USB audio interface products in early 2025, bringing compatibility with all ASIO and non-ASIO applications running on the platform.”

Tim Carroll, CEO of Focusrite Group and President of The MIDI Association

Yamaha – Native on Arm64

Yamaha creates the Steinberg-branded USB audio interfaces, which are fantastic performers on Windows and loved by their customers. In addition to working on the in-box class driver for Arm64, they are going to release optimized device-family versions of their audio interface drivers for Windows on Arm, giving users of their devices the best of both worlds.
We’re excited to see these drivers coming out for Arm64 in 2025!

Pete Brown, Microsoft

Announcing: New Musician-focused apps coming to Arm64

With the new MIDI stack and in-box ASIO, these three killer DAW apps, and two families of audio interfaces with optimized drivers for Arm64, we’re set up to help make the experience of creating music amazing on Windows. I am beyond excited for so many of these efforts to come together at this point in time. A huge thanks to all our hardware and software partners who have stepped up to help musicians and other audio creators on Windows.

Pete Brown, Microsoft

Cubase x Snapdragon: Redefining mobile music production

For the full blog from Pete on Make Great Music with Windows on Arm, please click on the link below.

https://devblogs.microsoft.com/windows-music-dev/making-music-on-windows/

DigiShow: Jam With Everything

At the MIDI Forum, there were a number of technology presentations, and one of the most fascinating was from Robin Zhang about DigiShow, the open-source software he developed.

Robin runs a creators' collective in a beautiful old building that was once home to the Lester School and Technical Institute. It brings together diverse people from different backgrounds (musicians, lighting designers, artists, designers) to work together and create unique pieces of art using DigiShow.

DigiShow is lightweight control software designed for live performances and immersive show spaces with music, lights, displays, robots, and interactive installations. It serves as an easy-to-use console for signal control, and also enables signal mapping between MIDI, DMX, OSC, ArtNet, Modbus, Arduino, Philips Hue, and other digital interfaces.
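To make the idea of signal mapping concrete, here is a conceptual Python sketch of the kind of translation DigiShow performs internally: a MIDI Note On mapped to a DMX dimmer level. The note-to-fixture table and scaling below are hypothetical, not DigiShow's actual configuration format.

```python
# Hypothetical mapping: drum notes -> DMX fixture channels
NOTE_TO_DMX_CHANNEL = {36: 1, 38: 2, 42: 3}  # kick, snare, hi-hat

def midi_note_to_dmx(note: int, velocity: int):
    """Return (dmx_channel, dmx_value) for a mapped note, else None."""
    channel = NOTE_TO_DMX_CHANNEL.get(note)
    if channel is None:
        return None
    return channel, round(velocity * 255 / 127)  # rescale 7-bit MIDI to 8-bit DMX

print(midi_note_to_dmx(36, 127))  # (1, 255): a hard kick drives the fixture to full
```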

Here are some of the scenarios the DigiShow LINK app is designed for:

Producers: For live music or theatre performances, DJs or producers can arrange show lighting cues and stage automation on MIDI tracks alongside the music tracks in Ableton Live or another DAW. At the show, pressing a button on the Launchpad instantly plays the music loop and lighting effects in sync.

Ableton Live with tracks programmed for DigiShow
Magnetic and MIDI demo with light show

Performers: When playing MIDI instruments like drums or keyboards, DigiShow can trigger dynamic lighting changes and even robotic movements from MIDI notes, following the beat of the music. Sensors can also be added to acoustic or DIY instruments to automatically generate MIDI notes.

Artists and Designers: When building interactive art installations, creators often need to make software that works with the hardware. DigiShow provides OSC, ArtNet, and WebSocket pipes for inter-application communication. Designers can create their interactive content in creative software like TouchDesigner, Unity 3D, or P5.js and access the hardware easily through DigiShow. Developers can also program in Python or JavaScript to connect to DigiShow and extend its interaction logic.

Storefront display programmed with DigiShow and simulated in TouchDesigner

Makers and Hobbyists: DigiShow is for all show makers, as well as hobbyists with few professional skills. Make digital shows for your own parties, or turn your house into a mini 'Disneyland'.

GeoShred Studio for macOS Released

GeoShred introduces a new paradigm for musical instruments, offering fluid expressiveness through a performance surface featuring the innovative “Almost Magic” pitch rounding. This cutting-edge software combines a unique performance interface with physics-based models of effects and musical instruments, creating a powerful tool for musicians. Originally designed for iOS devices, GeoShred is now available as an AUv3 plug-in for desktop DAWs, expanding its reach and integration into professional music production workflows.

GeoShred Studio, an AUv3 plug-in, runs seamlessly on macOS devices. Paired with GeoShredConnect, musicians can establish a MIDI/MPE connection between their iOS device running GeoShred and GeoShred Studio, enabling them to incorporate GeoShred’s expressive multi-dimensional control into their desktop production setup. This connection allows users to perform and record tracks from their iOS device as MIDI/MPE, which can be further refined and edited in the production process.

iCloud integration ensures that preset edits are synchronized between the iOS and macOS versions of GeoShred. For example, a preset saved on the iOS version of GeoShred automatically syncs with GeoShred Studio, providing a seamless experience across platforms.

Equipped with a built-in guitar physical model and 22 modeled effects, GeoShred Studio offers an impressive array of sonic possibilities. For those looking to expand their musical palette, an additional 33 physically modeled instruments from around the globe are available as in-app purchases (IAPs). These instruments range from guitars and bowed strings to woodwinds, brass, and traditional Indian and Chinese instruments.

GeoShred Studio is designed to be performed expressively using GeoShred’s isomorphic keyboard.

For users who don’t own the iOS version, the free GeoShred Control MPE controller (https://apps.apple.com/us/app/geoshred-control/id1336247116) is available for use with GeoShred Studio.

GeoShred Studio is also compatible with MPE controllers, conventional MIDI controllers, and even breath controllers, offering a wide range of performance options. GeoShred Studio is free to download, but core functionality requires the purchase of GeoShred Studio Essentials, which includes distinct instruments separate from those in the iOS/iPadOS app, and iOS/iPadOS purchases do not transfer.

Works with macOS Catalina or later.

GeoShred, unleash your musical potential!

We are offering a 25% discount on all iOS/iPadOS and macOS products in celebration of GeoShred 7, valid until October 10, 2024. See the pricing table at moforte.com/pricing.


AudioCipher Technologies has just announced the release of Version 4.0: The MIDI Vault.

Founded in 2020, the company has steadily made a name for itself as the only text-to-MIDI chord and melody generator on the market. The latest plugin improves on the classic algorithm with new chord-inversion and note-joining buttons, producing better voice leading and compositions.

The MIDI generator has also gained a new Save button that acts as a bridge into the new MIDI and audio file management tool, the MIDI Vault.

The MIDI Vault: A New Type of MIDI File Manager

AudioCipher's MIDI Vault is the first file manager to bundle MIDI and audio files together in cards. Each card is endowed with music metadata like BPM, key signature, genre, mood, type, and rating.

The Vault's search and sorting options make it easier to find those cards during future sessions. Each card offers MIDI and audio file playback right there on the card, with the option to drag and drop files right into a DAW's audio timeline.

As we know, virtual instruments have a big impact on how MIDI chords and melodies sound. When composing with MIDI, musicians often have a specific instrument or set of instruments in mind. Bounce each variation as an audio file and store them all in a single card.

Cards can hold a near-unlimited number of MIDI and audio files. Use the card's "notes" section to jot down important information about the instruments or effects that were used.

Files are stored locally on a hard drive, not in the cloud. Export the Vault's entire card collection as a single .AUCI file and store it on an external hard drive to save space. You can share those .AUCI collections with other AudioCipher users as well.

Erase the collection and start fresh at any time, knowing that it’s easy to re-import and integrate any collection with the click of a button.

New Text-to-MIDI Generation Features

AudioCipher began as a text-to-MIDI generator, and the company has continued to improve that algorithm in the latest version. The app converts words to MIDI using a musical cryptogram, comparable to the Enigma machine.

Users choose a key signature (scale/mode) and BPM, along with optional chord extension settings. A rhythm slider is provided to control or randomize note durations, including triplets.
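AudioCipher's actual cipher is its own, but a minimal Python sketch in the same spirit shows how a musical cryptogram can work: letters cycle through the degrees of a chosen scale. The mapping below is hypothetical and for illustration only.

```python
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]  # MIDI note numbers C4-B4

def word_to_notes(word: str, scale=C_MAJOR) -> list[int]:
    """Map each letter to a scale degree: a -> 1st, b -> 2nd, ... wrapping around."""
    notes = []
    for ch in word.lower():
        if ch.isalpha():
            degree = (ord(ch) - ord("a")) % len(scale)
            notes.append(scale[degree])
    return notes

print(word_to_notes("cafe"))  # [64, 60, 69, 67] -> E, C, A, G
```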

When users find an idea they like, the Save button spawns a new card in the MIDI Vault. The MIDI file is added automatically, carrying its BPM and scale metadata along with it. The quality of AudioCipher's output has increased significantly thanks to the new inversion and note-joining options.

Visit the AudioCipher homepage to learn more: https://www.audiocipher.com.

Caedence: browser-based music collaboration and performance software

Caedence is browser-based music collaboration and performance software that allows people to sync and customize virtually every aspect of a performance across devices, in real time, to help them learn faster, play better, and create amazing performances with less time, money, and effort.

Now in open beta, Caedence began as a passion project, but with the help and support of the MIDI Association has grown a lot.

How It All Started

It was 2018 in Minneapolis, Minnesota. Caedence founder Jeff Bernett had just joined a new six-person cover band and taken on the role of de facto Musical Director. The group had enormous potential, but also very limited time to prepare three hours of material for performance. Already facing the usual uphill battle of charting songs and accommodating learning styles, Jeff’s challenge was further complicated by a band leader who insisted on using backing tracks – famous for making live performance incredibly unforgiving.

Jeff knew of a few existing solutions that could help. But nothing got to the heart of the issue his group was experiencing: the jarring and stifling disconnect between individual practice, band rehearsal, and live performances. This disconnect is known and felt by all musicians. So why wasn’t there anything in the market to address it? What solution could simplify the process of learning music, but also enhance the creative process and elevate live performances – all while being easily accessible and simple to use? Enter the idea for Caedence – a performance and collaboration software that would allow musicians to practice, rehearse, and perform in perfect sync.

Finding The MIDI Association

Energized about creating a solution that could revolutionize music performance, Jeff, along with partners Terrance Schubring and Anton Friant, swiftly created a working prototype. After successfully sending MIDI commands from Caedence to control virtual & hardware instruments, guitar effect pedals, and stage lighting, the team realized that they truly had something great on their hands. MIDI was the catalyst that transformed Caedence from a useful individual practice tool into a fully conceived live music performance and collaboration solution.

Jeff had previously joined the MIDI Association as an individual, all the while connecting with other members to learn as much as he could. His enthusiasm attracted Executive Board member Athan Billias, who reached out to learn more about what Jeff was working on. After connecting, it was immediately clear that Caedence and the MIDI Association had natural synergy. Caedence soon joined as a Corporate Member, and Athan generously took on an unofficial advisory role for the young startup.

A Transformative Collaboration

Joining the MIDI Association was a game-changer for Caedence – both for the software itself and the Caedence team. With access to the Association’s wealth of knowledge and resources, the Caedence team was able to fix product bugs and create features they hadn’t even considered before.

With the software in a good place, Caedence was ready for a closed beta release. In an effort to sign up beta testers, the team headed to the NAMM Show in 2023 as part of the MIDI Association cohort. Attendees were attracted to the Caedence booth – its strong visuals and interactive nature regularly drawing a crowd to the area. The team walked people through the features of the platform, demonstrating how it could help musicians learn faster, play better, and create more engaging performances.

And then an unexpected thing happened. A high school music teacher from Oregon with a modern band program approached the team and asked about using Caedence in the classroom. What followed was a series of compelling conversations – and the identification of a new market for Caedence.

Open Beta and Beyond

In July of 2024, Caedence reached a huge milestone. The software began its open beta, ready for a broader audience and the feedback that will come with it. Schools across the country are ready to leverage Caedence in the 2024-2025 school year. You can sign up for the open beta on the Caedence website.

For Minneapolis makers at the intersection of tech, art, music, and education

Conferences are costly. Networking is lame. Happy hours are fun, but often less than productive. So Caedence built something different.

Caedence is also hosting its first-ever event, WAVEFRONT, on August 1st in Minneapolis. WAVEFRONT is a bespoke meeting of innovators, educators, entrepreneurs, and artists, hosted in an environment purpose-built to facilitate the exchange of ideas and encourage community amongst established and emerging talent alike.

WAVEFRONT is sponsored by several MIDI Association companies.

If you would like to learn more about WAVEFRONT please visit wavefrontmn.com.

ShowMIDI: effortlessly visualize MIDI activity


ShowMIDI is a multi-platform GUI application that effortlessly visualizes MIDI activity, filling a void in the available MIDI monitoring solutions.

Instead of wading through logs of MIDI messages to correlate relevant ones and identify what is happening, ShowMIDI visualizes the current activity and hides what you don’t care about anymore. It provides you with a real-time glanceable view of all MIDI activity on your computer.

When something happens that you need to analyze in detail, you can press the spacebar to pause the stream and examine a static snapshot of the most recent activity. Once you're done, press the spacebar again and ShowMIDI resumes with the latest activity.

This animation shows the difference between a traditional MIDI monitor on the left and ShowMIDI on the right: 


Open-source and multi-platform

ShowMIDI is written in C++ with JUCE for macOS, Windows, and Linux; an iOS version is in the works. You can find the source code in the GitHub repository.

Alongside the standalone application, ShowMIDI is also available as VST2, VST3, AUv2, AUv3, CLAP and LV2 plugins for DAWs and hosts that support MIDI effect plugins. This makes it possible to visualize MIDI activity for individual channels and to save these with your session.


Introduction and overview

Below is an introduction video that shows how the standalone version of ShowMIDI works. You get a glimpse of the impetus for creating this tool and how you can use it with multiple MIDI devices. The comparison between traditional MIDI monitor logs (including my ReceiveMIDI tool) and ShowMIDI's visualization clearly illustrates how much easier the information becomes to understand and consume.


Smart and getting smarter

ShowMIDI also analyzes the MIDI data and displays compound information, like RPN and NRPN messages that are assembled from multiple CC messages. RPN 6, the MPE configuration message, is also detected, and adds MPE modes to the channels that are part of an MPE zone.
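As background on why that analysis is useful: an RPN does not arrive as a single event, but as a series of CC messages (CC#101 and CC#100 select the parameter, then CC#6 carries the data). Here is a simplified Python sketch of such a decoder; it handles one channel and ignores the CC#38 data-entry LSB for brevity.

```python
class RpnDecoder:
    """Assemble RPN messages from their component CC messages (simplified)."""

    def __init__(self):
        self.rpn_msb = self.rpn_lsb = 0x7F  # 0x7F/0x7F is the "null" RPN

    def feed_cc(self, cc: int, value: int):
        """Feed one CC event; return (rpn_number, data) when an RPN completes."""
        if cc == 101:
            self.rpn_msb = value            # RPN select, MSB
        elif cc == 100:
            self.rpn_lsb = value            # RPN select, LSB
        elif cc == 6 and (self.rpn_msb, self.rpn_lsb) != (0x7F, 0x7F):
            return (self.rpn_msb << 7) | self.rpn_lsb, value
        return None

d = RpnDecoder()
d.feed_cc(101, 0)
d.feed_cc(100, 6)
print(d.feed_cc(6, 5))  # (6, 5): RPN 6 (MPE configuration), 5 member channels
```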

This is just the beginning: additional visualizations, smart analysis, and interaction modes will continue to be added. As MIDI 2.0 becomes more widely available, ShowMIDI will be able to switch its display mode to take those messages into account too.



AudioCipher V3: The Word-to-MIDI Melody and Chord Progression Generator

MIDI Association partner AudioCipher Technologies has just published Version 3.0 of their melody and chord progression generator plugin. Type in a word or phrase and AudioCipher will automatically generate MIDI files for any virtual instrument in your DAW. AudioCipher helps you overcome creative block with the first ever text-to-MIDI VST for music producers.

Chord generator plugins have been a hallmark of the MIDI effects landscape for years. Software like Captain Chords, Scaler 2, and ChordJam are some of the most popular in the niche. Catering to composers, these apps tend to feature music theory concepts like scale degrees and Roman numerals. They provide simple ways to apply chord inversions and sequencing, and to control the BPM. This lets users modify chord voicings and edit MIDI in the plugin before dragging it to a track.

AudioCipher offers similar controls over key signature, scale selection, chord selection, rhythm control, and chord/rhythm randomization. However, by removing in-app arrangement, users get a simplified interface that’s easier to understand and takes up less visual real estate in the DAW. Continue your songwriting workflow directly in the piano roll to perform the same actions that you would in a VST.

AudioCipher retails at $29.99 rather than the $49-99 price points of its competitors. When new versions are released, existing customers receive free software upgrades forever. Three versions have been published in the past two years. 

Difficulty With Chord Progressions

Beginner musicians often have a hard time coming up with chord progressions. They lack the skills to experiment quickly on a synth or MIDI keyboard. Programming notes directly into the piano roll is a common workaround, but it’s time consuming, especially if you don’t know any music theory and are starting from scratch.

Intermediate musicians may understand theory and know how to create chords, but struggle with finding a good starting point or developing an original idea.

Common chord progressions are catchy but run the risk of sounding generic. Pounding out random chords without respect for the key signature is a recipe for disaster. Your audience wants to hear that sweet spot between familiarity and novelty.

Most popular music stays in a single key and leverages chord extensions to add color. The science of extending a chord is not too complicated, but it can take time to learn.

Advanced musicians know how to play outside the constraints of a key, using modulation to prepare different chords that delight the listener. But these advanced techniques do require knowledge and an understanding of how to break the rules. It’s also hard to teach old dogs new tricks, so while advanced musicians have a rich vocabulary, they are at risk of falling into the same musical patterns.

These are a few reasons that chord progression generators have become so popular among musicians and songwriters today. 

AudioCipher’s Chord Progression Generator

Example of AudioCipher V3 generating chords and melody in Logic Pro X

Overthinking the creative process is a sure way to get frustrated and waste time in the DAW. AudioCipher was designed to disrupt ordinary creative workflows and introduce a new way of thinking about music. The first two versions of AudioCipher generated single-note MIDI patterns from words. Discovering new melodies, counter-melodies and basslines became easier than ever.

Version 3.0 continues the app’s evolution with an option to toggle between melody and chord generator modes. AudioCipher uses your word-to-melody cipher as a constant variable, building a chord upon each of the encrypted notes. Here’s an overview of the current features and how to use them to inspire new music.

AudioCipher V3.0 Features

  • Choose from 9 scales: the seven traditional modes (Major, Minor, Dorian, Phrygian, Lydian, Mixolydian, and Locrian), harmonic minor, and the twelve-note chromatic scale.
  • Choose from six chord types: Add2, Add4, Triad, Add6, 7th chords, and 9ths.
  • Select the random chord feature to cycle through chord types. The root notes will stay the same (based on your cryptogram) but the chord types will change, while sticking to the notes in your chosen scale.
  • Control your rhythm output: Whole, Half, Quarter, Eighth, Sixteenth, and all triplet subdivisions.
  • Randomize your rhythm output: Each time you drag your word to a virtual instrument, the rhythm will be randomized with common and triplet subdivisions between half-note and eighth-note durations.
  • Combine rhythm and chord randomization together to produce an endless variety of chord progressions based on a single word or phrase of your choice. Change the scale to continue experimenting.
  • Use playback controls on the standalone app to audition your text before committing. Drag the MIDI to your software instrument to produce unlimited variation and listen back from within your DAW.
  • The default preset is in C major with a triad chord type. Use the switch at the top of the app to move between melody and chord generator modes.

How to Write Chord Progressions and Melodies with AudioCipher

Get the creative juices flowing with this popular AudioCipher V3 technique. You’ll combine the personal meaning of your words with the power of constrained randomness. Discover new song ideas rapidly and fine-tune the MIDI output in your piano roll to make the song your own.

  • Choose a root and scale in AudioCipher
  • Switch to the Chord Generator option
  • Select “Random” from the chord generator dropdown menu
  • Turn on “Randomize Rhythm” if you want something bouncy or select a steady rhythm with the slider
  • Type a word into AudioCipher that has meaning to you (try the name of something you enjoy or desire)
  • Drag 5-10 MIDI clips to your software instrument track
  • Choose a chord progression from the batch and try to resist making any edits at first

Next we’ll create a melody to accompany your chord progression.

  • Keep the same root and scale settings
  • Switch to Melody Generator mode
  • Create a new software instrument track, preferably with a lead instrument or a bass
  • Turn on “Randomize Rhythm” if it was previously turned off
  • Drag 5-10 MIDI clips onto this new software instrument track
  • Move the melodies up or down an octave to find the right pitch range to contrast your chords
  • Select the best melody from the batch

Adjust MIDI in the Piano Roll

Once you’ve found a melody and chord progression that inspires you, proceed to edit the MIDI directly in your piano roll. Quantize your chords and melody in the piano roll, if the triplets feel too syncopated for your taste. You can use sound design to achieve the instrument timbre you’re looking for. Experiment with additional effects like adding strum and arpeggio to your chords to draw even more from your progressions.

With this initial seed concept in place, you can go on to develop the rest of the song using whatever techniques you’d like. Return to AudioCipher to generate new progressions and melodies in the same key signature. Reference the circle of fifths for ideas on how to update your key signature and still sound good. Play the chords and melody on a MIDI keyboard until you have ideas for the next section on your own. Use your DAW to build on your ideas until it becomes a full song.

Technical specs

AudioCipher is a 64-bit application that can be loaded either standalone or as a VST3 / Audio Component plugin in your DAW of choice. Ableton Live, Logic Pro X, FL Studio, Reaper, Pro Tools, and GarageBand have been tested and confirmed to work. Installers are available for both macOS and Windows 10, with installation tutorials available on the website's FAQ page.

A grassroots hub for innovative music software

Along with developing VSTs and audio sample packs, AudioCipher maintains an active blog that covers the most innovative trends in music software today. MIDI.org has covered AudioCipher's partnerships with AI music software developers like MuseTree and the AI music video generator VKTRS.

AudioCipher's recent articles dive into the cultural undercurrents of experimental music philosophy. One piece describes sci-fi author Philip K. Dick's concept of "synchronicity music", exploring the role of musicians within the simulation theory of his VALIS trilogy. Another article outlines the rich backstory of PlantWave, a device that uses electrodes to turn plants into MIDI music.

The blog also advocates for small, experimental software like Delay Lama, Riffusion, and Text To Song, sharing tips on how to access and use each of them. Grassroots promotion of these tools brings awareness to the emerging technology and spurs the developers to continue improving their apps.

Visit the AudioCipher website to learn more. 

Introducing Bace: a Voice-to-MIDI Plug-in & Standalone App


Almost two years ago, we set out to build a tool that could capture an idea and turn it into workable music. After a long period of developing machine learning models and getting the UI just right, I’m incredibly proud to say that we are ready to put our creation out into the world. That idea I had while on tour with my band, sitting in the back of a van and wondering “how can I capture my drum beat idea?” is finally here.

Today I’m excited to announce the launch of Bace.

Simply put, Bace is a music production app and audio plug-in that uses AI machine-learning technology to control software and hardware instruments with your voice.

Features 

➡️ 4 Drum Tracks
Control up to 4 different drum tracks including Kick, Snare, Hi-Hat and a Percussive sound.

➡️ MIDI Control
Each Drum Track also doubles as a MIDI Track. Control one plugin or 4 separate plugins on separate tracks.

➡️ Train Bace with Your Voice
Bace’s software can be trained to recognize your voice.

➡️ Use Your Microphone
Bace works with your own dynamic microphone. No special equipment needed!

➡️ Plug-in & Standalone App
Open the AU or VST3 plug-in inside your DAW or use the Standalone app to control hardware and software. 


For more information about Bace visit https://bace.app/

3 Best AI Music Generators for MIDI Creation

A new generation of AI MIDI software has emerged over the past 5 years. Google, OpenAI, and Spotify have each published a free MIDI application powered by machine learning and artificial intelligence.

The MIDI Association has reported on innovations in this space previously. Google's AI Duet, their Music Transformer, and Massive Technologies' AR Pianist all rely on MIDI to function properly. We're beginning to see the emergence of browser and plugin applications linked to cloud services, running frameworks like PyTorch and TensorFlow.

In this article we’ll cover three important AI MIDI tools – Google Magenta Studio, OpenAI’s MuseNet, and Spotify’s Basic Pitch MIDI converter. 

Google Magenta Studio 

Google Magenta is a hub for music and artificial intelligence today. Anyone who uses a DAW and enjoys new plugins should check out the free Magenta Studio suite. It includes five applications. Here’s a quick overview of how they work:

  • Continue – Continue lets users upload a MIDI file and leverage Magenta's music transformer to extend the music with new sounds. Keep your temperature setting close to 1.0–1.2 so that your MIDI output sounds similar to the original input, but with variations.
  • Drumify – Drumify creates grooves based on the MIDI file you upload. The team recommends uploading a single instrumental melody at a time to get the best results. For example, upload a bass line and it will try to produce a drum beat that complements it, in MIDI format.
  • Generate – Maybe the closest tool in the collection to a 'random note generator', Generate uses a Variational Autoencoder (MusicVAE) trained on millions of melodies and rhythms.
  • Groove – This nifty tool takes a MIDI drum track and uses Magenta to modify the rhythm slightly, giving it a more human feel. So if your music was overly quantized or had been performed sloppily, Groove could be a helpful tool.
  • Interpolate – This app asks you for two separate MIDI melody tracks. When you hit generate, Magenta composes a melody that bridges them together.

The Magenta team is also responsible for Tone Transfer, an application that transforms audio from one instrument to another. It’s not a MIDI tool, but you can use it in your DAW alongside Magenta Studio.

OpenAI MuseNet 

MuseTree – Free Nodal AI Music Generator


OpenAI is a major player in the AI MIDI generator space. Their DALL·E 2 web application took the world by storm this year, creating stunningly realistic artwork and photographs in any style. But what you might not know is that they've created two major music applications, MuseNet and Jukebox.

  • MuseNet – MuseNet is comparable to Google’s Continue, taking in MIDI files and generating new ones. But users can constrain the MIDI output to parameters like genre and artist, introducing a new layer of customization to the process.
  • MuseTree – If you’re going to experiment with MuseNet, I recommend using this open source project MuseTree instead of their demo website. It’s a better interface and you’ll be able to create better AI music workflows at scale.
  • Jukebox – Published roughly a year after MuseNet, Jukebox focuses on generating audio files based on a set of constraints like genre and artist. The output is strange, to say the least. It does kind of work, but in other ways it doesn’t. The application can also be tricky to operate, requiring a Google Colab account and some patience troubleshooting the code when it doesn’t run as expected. 

Spotify Basic Pitch (Audio-to-MIDI)

Spotify’s Basic Pitch: Free Audio-To-MIDI Converter

Spotify is the third major contender in this AI music generator space. They're no stranger to music production tools: Soundtrap, the mobile-friendly music creation app they acquired, launched back in 2013. As for machine learning, a publicly available Spotify AI toolset already powers their recommendation engine.

Basic Pitch is a free browser tool that lets you upload any song as an audio file and convert it into MIDI. Basic Pitch leverages machine learning to analyze the audio and predict how it should be represented in MIDI. Prepare to do some cleanup, especially if there's more than one instrument in the audio.
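Alongside the browser tool, Basic Pitch is also published as an open-source Python package. Here is a minimal sketch of offline conversion, based on the project's published examples (verify the API against the current README):

```python
from basic_pitch.inference import predict

# Run the model on an audio file; midi_data is a PrettyMIDI object.
model_output, midi_data, note_events = predict("my_song.wav")

midi_data.write("my_song.mid")               # save the predicted MIDI
print(f"Transcribed {len(note_events)} note events")
```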

Spotify hasn’t published a MIDI generator like MuseNet or Magenta Studio’s Continue. But in some ways Basic Pitch is even more helpful, because it generates MIDI you can use right away, for a practical purpose. Learn your favorite music quickly!

The Future of AI MIDI Generators

The consumer applications we’ve mentioned, like Magenta Studio, MuseTree, and Basic Pitch, will give you a sense of their current capabilities and limitations. For example, Magenta Studio and MuseTree work best when they are fed special types of musical input, like arpeggios or pentatonic blues melodies. 

Product demos often focus on the best use cases, but as you push these AI MIDI generators to their limits, the output becomes less coherent. That being said, there’s a clear precedent for future innovation and the race is on, amongst these big tech companies, to compete and innovate in the space.

Private companies like AIVA and Soundful are also offering AI music generation for licensing. Their user-friendly interfaces are built for social media content creators who want to license music at a lower cost. Users create an account, choose a genre, generate audio, and then download the original music for their projects.

Large digital content libraries have been acquiring AI music generator startups in recent years. Apple bought a London company called AI Music in February 2022, while Shutterstock purchased Amper Music in 2020. This suggests a large upcoming shift in how licensed music is created and distributed.

At the periphery of these developments, we're beginning to see robotics teams that have successfully integrated AI music generators into singing, instrument-playing animatronic robots like Shimon and Kuka. Built by the Center for Music Technology at Georgia Tech, Shimon has performed live with jazz groups and can improvise original solos thanks to the power of artificial intelligence.

Stay tuned for future articles, with updates on this evolving software and robotics ecosystem. 

Solve Problems with MIDI Plug-Ins

A DAW’s MIDI Plug-Ins Can Provide Solutions to Common Problems


In a world obsessed with audio plug-ins, MIDI plug-ins may not seem sexy—but with MIDI’s continued vitality, they remain very useful problem solvers. For an introduction to MIDI plug-ins, please check out the article Why MIDI Effects Are Totally Cool: The Basics.

Although the processing of MIDI data has existed since at least the heyday of the Commodore 64, the modern MIDI plug-in debuted when Cakewalk introduced the MFX open specification for Windows MIDI plug-ins. Steinberg introduced a wrapper for MFX plug-ins, and also developed a cross-platform VST format. MIDI plug-ins run the gamut from helpful utilities that supplement a program like MOTU Digital Performer, to beat-twisting effects for Ableton Live. After Apple Logic Pro X added Audio Units-based MIDI plug-ins, interest continued to grow. Typically, MIDI plug-ins insert into MIDI tracks similarly to how audio plug-ins insert into audio tracks (Fig. 1).

Figure 1: In Cakewalk by BandLab, you can drag MIDI plug-ins from the browser into a MIDI track’s effects inserts.

Unfortunately, most companies lock MIDI plug-ins to their own programs. Therefore, this article takes a general approach, describing typical problems you can solve with MIDI plug-ins; note that not all programs have plug-ins that provide these functions, nor do all hosts support MIDI plug-ins.

Instant Quantization for Faster Songwriting

MIDI plug-ins are generally real-time and non-destructive (some can work offline as well). If you’re writing a song and craft a great drum groove that suffers from shaky timing, don’t dig into the quantization menu and start editing—insert a MIDI quantizing plug-in, set it for eighth or 16th notes, and keep grooving. You can always do the “real” edits later.

Create Harmonies, Map Drums, and Do Arpeggiations

If your host has a Transpose MIDI plug-in, it might do a lot more than audio transposition plug-ins—like transpose by intervals or diatonically, change scales in the process of transposing from one key to another, or create custom transposition maps that can map notes to drums. The image above shows a variety of MIDI plug-ins; clockwise from upper left is the Digital Performer arpeggiator, Live arpeggiator, Cubase microtuner, Live randomizer, Cubase step sequencer, Live scale constrainer, Digital Performer Transposer, Cubase MIDI Echo.

Filter Data

You’re driving two instruments from a MIDI controller, and want one to respond to sustain but not the other…or filter out pitch bend before it gets to one of the instruments. Data filtering plug-ins can implement these applications, but many can also create splits and layers. If the plug-in can save presets, you can instantly call up oft-used functions (like remove aftertouch data).

Re-Map Controllers

Feed your footpedal through a re-mapping plug-in to control breath control parameters, mod wheel, volume, aftertouch, and the like. There may also be an option to thin or randomize control data, or map data to a custom curve.

Process MIDI Data Dynamically

You can compress, expand, and limit MIDI data (to low, high, or both values). For example, a plug-in could specify that all velocities below a certain floor are raised to that floor, or compress velocity dynamics by a ratio, like 2:1. While you don't need a MIDI plug-in to do these functions (you can usually scale velocities, then add or subtract a constant using traditional MIDI processing functions), a plug-in is more convenient.
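Here is a minimal Python sketch of that 2:1 velocity compression, assuming simple linear scaling above a threshold (actual plug-ins vary in how they map values):

```python
def compress_velocity(vel: int, threshold: int = 64, ratio: float = 2.0) -> int:
    """Scale velocities above the threshold back toward it by the given ratio."""
    if vel <= threshold:
        return vel
    return min(127, round(threshold + (vel - threshold) / ratio))

print(compress_velocity(127))  # 96: 63 units over the threshold becomes ~32
```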

MIDI Arpeggiation Plug-Ins

Although arpeggiation isn’t as front and center in today’s music as it was when Duran Duran was tearing up the charts, it’s still valid for background fills and ear candy. With MIDI plug-in arpeggiator options like multiple octaves, different patterns, and rhythmic sync, arpeggiation is well worth re-visiting if you haven’t done so lately. Arpeggiators can also produce interesting patterns when fed into percussion tracks.

“Humanize” MIDI Parts so They Sound Less Metronomic

"Humanizer" plug-ins usually randomize parameters, like start times and/or velocities, so the MIDI timing isn't quite so rigid. Personally, I think they're more accurately called "how many drinks did the player have" because musicians tend not to create totally random changes. But taking a cue from that, consider teaming humanization with an event filter. For example, if you have a string of 16th-note hi-hat triggers, use an event filter to increase velocities that fall on the first note of a beat, and perhaps add a slight increase to the third 16th note in each series of four. Then if you humanize velocity slightly, you'll have a part that combines conscious change with an overlay of randomness.
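Here is a minimal Python sketch of that hi-hat recipe, with illustrative accent and jitter amounts: accent the first 16th of each beat, bump the third 16th, then overlay slight randomness.

```python
import random

def humanize_hats(hits, accent=20, sub_accent=8, jitter=5):
    """hits: list of (position_in_16ths, velocity) pairs. Returns processed copies."""
    out = []
    for pos, vel in hits:
        if pos % 4 == 0:       # first 16th of the beat: strong accent
            vel += accent
        elif pos % 4 == 2:     # third 16th of the beat: lighter accent
            vel += sub_accent
        vel += random.randint(-jitter, jitter)  # the "humanize" overlay
        out.append((pos, max(1, min(127, vel))))
    return out

print(humanize_hats([(i, 80) for i in range(8)]))  # two beats of 16th-note hats
```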

Go Beyond Traditional Echo

Compared to audio echo, MIDI echo can be far more flexible. Fig. 2 shows, among other MIDI plug-ins, Cakewalk’s MIDI Echo plug-in.

Figure 2: Clockwise from upper left, Logic Pro X Randomizer and Chord Trigger, Cakewalk Data Filter, Echo, and Velocity processor.

Much depends on a plug-in’s individual capabilities, but many allow variations on the echoes—change pitch as notes echo, do transposition, add swing (try that with your audio plug-in equivalent), and more. But if those options aren’t present, there’s still DIY potential because you can render the track with a MIDI plug-in, then tweak the echoes manually. MIDI echo makes it particularly easy to generate staccato, “dugga-dugga-dugga” synth parts that provide rhythmic underpinnings to many dance tracks; the only downside is that long, languid echoes with lots of repeats eat up synth voices.

Experiment with Adding Human “Feel”

A Shift MIDI plug-in shifts note start times forward or backward. This benefits greatly from MIDI plug-ins’ real-time operation because you can listen to the changes in “feel” as you move, for example, a snare hit ahead or behind the beat somewhat.

Remove Glitches

“De-glitcher” plug-ins remove duplicate events that hit on the same beat, filter out notes below a specific duration or velocity, “de-flam” notes to move the start times of multiple out-of-sync notes to the average start time, or other options that help clean up pollution from MIDI data streams.

Constrain Notes to a Scale, and Nuke Wrong Notes

Plug-ins that can snap to scale pull errant notes into a defined scale—just bash away at a keyboard (or have a cat walk across it), and there won’t be any “wrong” notes. Placing this after a randomizer can be very interesting, as it offers the benefits of randomness yet notes are always constrained to particular scales.
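A snap-to-scale processor is simple at heart, as this Python sketch shows: each incoming note is pulled to the nearest pitch whose pitch class belongs to the chosen scale (C major here).

```python
C_MAJOR_PITCH_CLASSES = {0, 2, 4, 5, 7, 9, 11}  # C D E F G A B

def snap_to_scale(note: int) -> int:
    """Return the nearest MIDI note whose pitch class is in the scale."""
    for offset in (0, -1, 1, -2, 2):   # search outward from the played note
        candidate = note + offset
        if candidate % 12 in C_MAJOR_PITCH_CLASSES:
            return candidate
    return note

print(snap_to_scale(61))  # 60: C#4 snaps down to C4
```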

Analyze Chords

Put this plug-in on a track, and it will read out the kind of chord made by the track’s notes. With ambiguous chords, the analyzer may display all voicings it recognizes. Aside from figuring out exactly what you played when you had a spurt of inspiration, for those using MIDI backing tracks an analyzer simplifies figuring out chord progressions.

Add an LFO to Just About Anything

Being able to change MIDI parameters rhythmically can add considerable interest and animation to synth modules and MIDI-controllable signal processors. Although some DAWs let you draw in periodic waveforms (and you can always take the time to create a library of MIDI continuous controller signals suitable for pasting into programs), a Continuous Controller generator provides these same functions in a much more convenient package.
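As an illustration of what such a generator emits, here is a sketch that renders one cycle of a sine-wave LFO as mod wheel (CC#1) values; the resolution and destination CC are arbitrary choices.

```python
import math

def cc_lfo_cycle(steps: int = 16, cc: int = 1, depth: int = 63, center: int = 64):
    """Yield (cc_number, value) pairs tracing one sine cycle."""
    for i in range(steps):
        value = center + depth * math.sin(2 * math.pi * i / steps)
        yield cc, max(0, min(127, round(value)))

print(list(cc_lfo_cycle(steps=8)))  # one cycle, eight CC#1 events
```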

The above functions are fairly common—but scratch beneath the surface, and you'll find all kinds of interesting MIDI plug-ins, either bundled with hosts or available from third parties. Midiplugins.com lists MIDI plug-ins from various companies. Some of the links have disappeared into internet oblivion and some belong to zombie sites, but there are still plenty of potentially useful MIDI effects. More resources are available at midi-plugins.de (the most current of the sites) and tencrazy.com. Happy data diving!

How to Create Polyrhythmic MIDI Echoes

 There’s more to life than audio echo – like MIDI echo

Although the concept of MIDI echo has been around for years, early virtual instruments often didn’t have enough voices to play back new echoes without stealing voices from previous echoes. With today’s powerful computers and instruments, this is less of a problem – so let’s re-visit MIDI echo.


Copy and Drag MIDI Tracks 

It’s simple to create MIDI echo: Copy your MIDI track, and then drag the notes for the desired amount of delay compared to the original track. Repeat for as many echoes as you want, then bounce all the parts together (or not, if you think you’ll want to edit the parts further). In the screen shot above, the notes colored red are the original MIDI part, the blue notes are delayed by an eighth note, and the green notes are delayed by a dotted-eighth note. The associated note velocities have also been colored to show the velocity changes for the different echoes.
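The same copy-and-drag idea is easy to express in code. This Python sketch, assuming note events as simple (time_in_beats, note, velocity) tuples, adds an eighth-note (0.5 beat) and a dotted-eighth (0.75 beat) echo layer, each quieter than the last, matching the red, blue, and green layers described above.

```python
def polyrhythmic_echo(events, delays=(0.5, 0.75), vel_scale=0.7):
    """Return events plus delayed, velocity-scaled copies for each delay."""
    out = list(events)
    for i, delay in enumerate(delays, start=1):
        scale = vel_scale ** i                 # each echo layer is quieter
        out += [(t + delay, note, max(1, round(vel * scale)))
                for t, note, vel in events]
    return sorted(out)

original = [(0.0, 60, 100), (1.0, 64, 100)]
print(polyrhythmic_echo(original))
```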


 Change Note Velocities for More Variety

But wait—there’s more! You can not only create polyrhythmic echoes, but also change velocities on the different notes. Although the later echoes can have different dynamics, there’s no law that says all the changes must be uniform. Nor do you have to follow the standard “rules” of echo—consider dragging very low-velocity notes ahead of the beat to give pre-echo.


 MIDI Plug-Ins for Echo

Some DAWs that support MIDI plug-ins offer MIDI echo, which sure is convenient. Even if yours doesn't, though, you can always create echoes manually, as described above. The bottom line is that there are many, many possibilities with MIDI echo…check them out.

Mixing with Virtual Instruments: The Basics

DAW software, like Ableton Live, Logic, Pro Tools, Studio One, etc. isn’t just about audio. Virtual instruments that are driven by MIDI data produce sounds in real time, in sync with the rest of your tracks. It’s as if you had a keyboard player in your studio who played along with your tracks, and could play the same part, over and over again, without ever making a mistake or getting tired.

MIDI-compatible controllers, like keyboards, drum pads, mixers, control surfaces, and the like, generate data that represents performance gestures (fig. 1). These include playing notes, moving controls, changing level, adding vibrato, and the like. The computer then uses this data to control virtual instruments and effects.

Figure 1: Native Instruments’ Komplete keyboards generate MIDI data, but can also edit the parameters of virtual instruments.

Virtual Instrument Basics

Virtual instrument "tracks" are not traditional digital audio tracks, but instrument plug-ins triggered by MIDI data. The instruments exist in software. You can play a virtual instrument in real time, record what you play as data, edit it if desired, and then convert the virtual instrument's sound to a standard audio track—or let it continue to play back in real time.

Virtual instruments are based on computer algorithms that model or reproduce particular sounds, from ancient analog synthesizers, to sounds that never existed before. The instrument outputs appear in your DAW’s mixer, as if they were audio tracks.

Why MIDI Tracks Are More Editable than Audio Tracks

Because virtual instruments are driven by MIDI data, editing the data that drives an instrument changes the part. This editing can be as simple as transposing to a different key, or as complex as changing an arrangement by cutting, pasting, and processing MIDI data in various ways (fig. 2).

Figure 2: MIDI data in Ableton Live. The rectangles indicate notes, while the lines along the bottom show the dynamics for the various notes. All of this data is completely editable.

Because MIDI data can be modified so extensively after being recorded, tracks triggered by MIDI data are far more flexible than audio tracks. For example, if you record a standard electric bass part and decide you should have played the part with a synthesizer bass instead, or used the neck pickup instead of the bridge pickup, you can’t make those changes. But the same MIDI data that drives a virtual bass can just as easily drive a synthesizer, and the virtual bass instrument itself will likely offer the sounds of different pickups.

How DAWs Handle Virtual Instruments

Programs handle virtual instrument plug-ins in two main ways:

  • The instrument inserts in one track, and a separate MIDI track sends its data to the instrument track.
  • More commonly, a single track incorporates both the instrument and its MIDI data. The track itself consists of MIDI data. The track output sends audio from the virtual instrument into a mixer channel.

Compared to audio tracks, there are three major differences when mixing with virtual instruments:

  • The virtual instrument’s audio is typically not recorded as a track, at least initially. Instead, it’s generated by the computer, in real time.
  • The MIDI data in the track tells the instrument what notes to play, the dynamics, additional articulations, and any other aspects of a musical performance.
  • In a mixer, a virtual instrument track acts like a regular audio track, because it’s generating audio. You can insert effects in a virtual instrument’s channel, use sends, do panning, automate levels, and so on.

However, after doing all needed editing, it’s a good idea to render (transform) the MIDI part into a standard audio track. This lightens the load on your CPU (virtual instruments often consume a lot of CPU power), and “future-proofs” the part by preserving it as audio. Rendering is also helpful in case the instrument you used to create the part becomes incompatible with newer operating systems or program versions. (With most programs, you can retain the original, non-rendered version if you need to edit it later.)

The Most Important MIDI Data for Virtual Instruments

The two most important parts of the MIDI “language” for mixing with virtual instruments are note data and controller data.

  • Note data specifies a note’s pitch and dynamics.
  • Controller data creates modulation signals that vary parameter values. These variations can be periodic, like vibrato that modulates pitch, or arbitrary variations generated by moving a control, like a physical knob or footpedal.

Just as you can vary a channel’s fader to change the channel level, MIDI data can create changes—automated or human-controlled—in signal processors and virtual instruments. These changes add interest to a mix by introducing variations.

Instruments with Multiple Outputs

Many virtual instruments offer multiple outputs, especially if they’re multitimbral (i.e., they can play back different instruments, which receive their data over different MIDI channels). For example, if you’ve loaded bass, piano, and ukulele sounds, each one can have its own output, on its own mixer channel (which will likely be stereo).

However, multitimbral instruments generally have internal mixers as well, where you can set the various instruments’ levels and panning (fig. 3). The mix of the internal sounds appears as a stereo channel in your DAW’s mixer. The instrument will likely incorporate effects, too.

Figure 3: IK Multimedia’s SampleTank can host up to 16 instruments (8 are shown), mix them down to a stereo output, and add effects.

Using a stereo, mixed instrument output has pros and cons.

  • Pro: There’s less clutter in your software mixer, because each instrument sound doesn’t need its own mixer channel.
  • Pro: If you load the instrument preset into a different DAW, the mix settings travel with it.
  • Con: To adjust levels, the instrument’s user interface has to be open, which takes up screen space.
  • Con: If the instrument doesn’t include the effects plug-ins needed to create a particular sound, you’ll need to use the instrument’s individual outputs instead, and insert effects in your DAW’s mixer channels. (For example, using separate outputs for drum instruments allows adding individual effects to each drum sound.)

Are Virtual Instruments as Good as Physical Instruments?

This is a question that keeps cropping up, and the answer is…it depends. A virtual piano won’t have the resonating wood of a physical piano, but paradoxically, it might sound better in a mix because it was recorded with tremendous care, using the best possible microphones. Also, some virtual instruments would be difficult, or even impossible, to create as physical instruments.

One possible complaint about virtual instruments is that their controls don’t respond as smoothly as, for example, an analog synthesizer’s. This is because a control’s position has to be converted into digital data, which divides its travel into steps. However, the MIDI 2.0 specification increases control resolution dramatically; the steps become so small that rotating a control feels just like rotating the control on an analog synthesizer.
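As a rough illustration of what that extra resolution means, here’s one simple way to upscale a 7-bit MIDI 1.0 value to 16 bits by bit repetition (an illustrative sketch, not the official MIDI 2.0 translation algorithm):

```python
# Upscale a 7-bit value (0-127) to 16 bits (0-65535) by repeating the bit
# pattern, which preserves both full-scale endpoints.
def upscale_7_to_16(value7: int) -> int:
    assert 0 <= value7 <= 127
    result = value7 << 9        # shift into the top 7 of 16 bits
    result |= value7 << 2       # repeat the pattern below
    result |= value7 >> 5
    return result

print(upscale_7_to_16(0))    # 0
print(upscale_7_to_16(64))   # 33026 -> many fine steps between old values
print(upscale_7_to_16(127))  # 65535 -> full scale preserved
```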

MIDI 2.0 also makes it easier to integrate physical instruments with DAWs, so they can be treated more like virtual instruments, and offer some of the same advantages. So the bottom line is that the line between physical and virtual instruments continues to blur—and both are essential elements in today’s recordings.

Ableton-May is MIDI Month Platinum Sponsor

We make Live, Push and Link — unique software and hardware for music creation and performance. With these products, our community of users creates amazing things.
Ableton was founded in 1999 and released the first version of Live in 2001. Our products are used by a community of dedicated musicians, sound designers, and artists from across the world.

Making music isn’t easy. It takes time, effort, and learning. But when you’re in the flow, it’s incredibly rewarding. We feel the same way about making Ableton products. The driving force behind Ableton is our passion for what we make, and the people we make it for.


Song Maker Kit

The ROLI Songmaker Kit comprises some of the most innovative and portable music-making devices available. It’s centered around the Seaboard Block, a 24-note controller featuring ROLI’s acclaimed keywave playing surface. It’s joined by the Lightpad Block M touch controller and the Loop Block control module, for comprehensive control over the included Equator and NOISE software. Complete with a protective case, the ROLI Songmaker Kit is a powerful portable music creation system.

The Songmaker Kit also includes Ableton Live Lite, and Ableton is a May MIDI Month platinum sponsor.


Roli and Ableton Live Lite

This YouTube video shows how to use ROLI Blocks with Ableton Live.


Brothers Marco and Jack Parisi recreate a Michael Jackson classic hit

 

Electronic duo PARISI are true virtuosos on ROLI instruments, whose performances have amazed audiences all over the world — and their latest rendition of Michael Jackson’s iconic pop hit “Billie Jean” is no exception.

EasyController – By 42Percent Noir

Easily connect your MIDI gear with Unity, Max/MSP and openFrameworks – and control them all at the same time

EasyController is a standalone virtual tool for live performance. We use it in our audio-visual performances to map our MIDI controllers and presets while juggling between Unity, Max/MSP, and openFrameworks.

For example, it enables us to control our visuals in Unity and the audio in Max/MSP, allowing back-and-forth communication between the software and the MIDI controller using a virtual representation.

This gives us a standard, easy way of programming MIDI gestures and, importantly, lets us focus on creative development and enjoy the live performance.

EasyController is available for free on macOS and can be downloaded from our website – 42noir.com/es

And a short tutorial is available on our YouTube channel – https://youtu.be/ptsnJpHuKZ8

Hope you’ll enjoy this one! – Shalti & Gil from 42Percent Noir – 42noir.com

The New Roland Cloud: ZENOLOGY Software Synthesizer, New Membership Plans, and Lifetime Keys

Earlier this month, Roland unveiled the biggest enhancements yet to the Roland Cloud platform. These include the introduction of the ZENOLOGY software synthesizer, new membership options, and the ability to buy Lifetime Keys to individual instruments.

Since its inception, Roland Cloud has grown into a collection of more than 50 instruments—including Roland legends like the TR-808, JUPITER-8, and JUNO-106—each giving users instantly inspiring, genre-defining sounds from the past, present, and future of music. 


Introducing the New ZENOLOGY Software Synthesizer 

Roland’s ZEN-Core Synthesis System—found in the Roland FANTOM and JUPITER-X synthesizers, MC Series Grooveboxes, and the RD-88 Digital Piano—is now available for use in DAWs with the new ZENOLOGY Software Synthesizer, offered exclusively through Roland Cloud. Users can play the same sounds in both their DAW and hardware instruments, create custom banks, and share them with friends and collaborators. Anyone with an active Roland Account can access ZENOLOGY Lite.



New Membership Plans


Roland Cloud now offers three membership plans: Core, Pro, and Ultimate. Core ($29.99/year or $2.99/month) includes access to the ZENOLOGY Software Synthesizer and ZEN-Core sound packs.

Pro ($99.00/year or $9.99/month) gives unlimited access to the TR-808, D-50, and ZENOLOGY Pro (coming fall 2020). Pro also includes all ZEN-Core Sound Packs, Wave and Model Expansions for software, plus the Anthology, TERA, FLAVR, and Drum Studio collections, and all software patches and patterns.

Ultimate ($199.00/year or $19.99/month) includes all Legendary and SRX collections plus unlimited access to all instruments and sounds. 


Lifetime Keys to Individual Roland Cloud Instruments 

Users can also purchase Lifetime Keys to Roland Cloud instruments like the TB-303, TR-909, JX-3P, and many others. These Lifetime Keys provide unrestricted access to a single Roland Cloud software instrument for as long as a Roland Account remains active. 

Experience Roland Cloud by downloading Roland Cloud Manager 2.5:  

Yamaha and Camelot Pro make playing live easier

LIVE PERFORMING IS NOW MORE FUN AND EASY

CROSS PLATFORM LIVE PERFORMANCE APPLICATION

Wondering how to connect and control your hardware and software instruments in one place? Want to remotely control your Yamaha synthesizers and quickly recall presets on stage? How about attaching a lead sheet or music score with your own notes to a set of sounds?

Camelot Pro and Yamaha have teamed up with special features for Yamaha Synth owners.

REGISTER AND GET CAMELOT PRO FOR MAC OS OR WINDOWS

Download your Camelot Pro copy now with a special offer for Yamaha Synth owners: try the full version FREE for three months with an option to purchase for 40% off.

The promo is valid from October 1, 2019 to September 30, 2020.

Upgrade your live performance experience to the next level:

  • Build your live set list with ease
  • Manage your Yamaha instruments using smart maps (no programming skills required!)
  • Combine, layer and split software instruments with your Yamaha synths
  • Get rid of standard connection limits with Camelot Advanced MIDI routing
  • Attach music scores or chords to any scene

The really slick thing about combining Yamaha synths with Camelot Pro is that it lets you easily integrate your hardware synths and VST/AU plugins for live performance. The Yamaha synths connect to your computer via USB and carry both digital audio and MIDI. So just connect your computer to your Yamaha synth, and then your Yamaha synth to your sound system. Camelot allows you to combine your hardware and software in complex splits and layers, and everything comes out the analog outputs of your Yamaha synth.



Camelot Pro Key Features 


Camelot Pro Tutorial: The Definitive Guide


Camelot Pro Tutorial: MIDI Connections


Camelot Pro Tutorial: Managing Any MIDI Device


Yamaha Hardware List

Integrate VST/AU software instruments

Add song notation

Advanced MIDI Routing

Compatible with Mac/PC and iPad


Don’t own a Yamaha Synth?  

No problem, Camelot Pro works with lots of synths. You can check the hardware list here:

https://camelotpro.com/hardware-instruments/


Try it for free 

 There is even a free version of Camelot that you can download just for signing up for the Camelot Newsletter. 

FL Studio- MIDI Recording and Editing

Here are three new videos about how to use MIDI in FL Studio




20% Off Online Video Courses

This month get 20% off any TMA curriculum. Choose a monthly or an annual subscription and save even more! Supercharge your music production skills today.


...

Massive Online Courseware Library : MIDI Association : NonLinear Educating

Nonlinear Educating is an adaptive technology company dedicated to improving the way the world learns. The combination of our powerful, modular video-courseware production and distribution platform and our extensive library of industry-leading training courses has granted us the opportunity to empower a variety of partners from a multitude of industries. The foundationally modular approach to our application infrastructure enables us to rapidly customize instances of our platform to meet the specific needs of our partners. We are agile and adaptive, and are committed to developing the most efficient and robust video-learning platform on the internet.

Cubase 10 MIDI Recording and Editing

Here are three new videos about how to use MIDI in Cubase 10 





DUBLER STUDIO KIT: Your voice, the ultimate MIDI controller.

Dubler Studio Kit is a real-time vocal recognition MIDI controller. 

Vochlea Music recently launched a Kickstarter for Dubler Studio Kit and has raised over double its goal of $53,000, with 29 days still left in the campaign.

The Kit consists of both hardware and software. 

  • The Dubler software — a virtual MIDI instrument (a desktop application for Mac and PC) compatible with any production software (DAW). It is not a plugin or VST.
  • The Dubler microphone — a low-latency custom USB mic, tuned for the Dubler software.

As musicians we all sing, hum, and record voice memos to track snippets of ideas – but that’s often where that idea ends, never making it into the studio or onto the stage.

Our goal: To help you to release the stems of musical ideas trapped inside your head and get them directly into your production software— simply by vocalising them.

Now anyone can turn their voice directly into MIDI— quickly, easily, intuitively and LIVE.

by Vochlea

Dubler Studio Kit Features 

  • Compatible with any DAW (Ableton/Logic/Reason/FL Studio/ProTools/GarageBand etc).
  • Learns your voice in less than 60 seconds.
  • Allows you to use your voice as a live MIDI controller.
  • Live pitch tracking for synth control.
  • Accurately select between, and trigger, up to 8 samples using your voice.
  • Sustain sounds, samples and notes vocally.
  • Responsive to changes in velocity — takes all the information from exactly how you make a sound.
  • Simultaneously talks to multiple MIDI channels— enabling sample triggering and synth control at the same time.
  • Control up to 4 CC [MIDI mapping] values based on the way you make a sound. Then easily map to anything from synth selection, effects controls, synth blending, filters and more.
  • Additional control of Pitch Bend and Envelope Following.
  • Works with non-vocal sounds too — clap a beat or mic up an instrument.
  • Can be used to control effects and filters on other MIDI devices and instruments.
  • Low latency [10-12ms] enabling real-time, live control.

...

Vochlea Music

Live vocal MIDI control is arriving for artists and producers in 2019. Select, trigger and manipulate samples and instruments with our Dubler microphone and app. Compatible with any DAW. Launching March 2019.

Other apps that convert voice to MIDI 

There are other apps that do very similar things to Dubler. In fact, most DAWs allow you to convert monophonic audio tracks into MIDI.
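Under the hood, every one of these tools relies on the standard conversion from a detected pitch (in Hz) to a MIDI note number. A minimal Python sketch:

```python
import math

# A4 = 440 Hz is MIDI note 69, and each semitone is a factor of 2**(1/12).
def freq_to_midi_note(freq_hz: float) -> int:
    return round(69 + 12 * math.log2(freq_hz / 440.0))

print(freq_to_midi_note(440.0))   # 69 (A4)
print(freq_to_midi_note(261.63))  # 60 (middle C)
```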


...

How to turn a vocal recording into MIDI in a DAW | MusicRadar

Got a melody in your head but don’t know how to transcribe it as MIDI note data? Your DAW might be able to do it for you…


...

imitone: mind to melody

play any instrument with your voice.
explore and create music with only a microphone.


...

HumBeatz

HumBeatz is the revolutionary music making application that allows you to hum or beatbox and turn it into the musical instrument of your choice. Now you can quickly create musical parts and song sketches with just your mouth!

Audiobus 3 adds MIDI Learn feature

Audiobus 3 update adds MIDI learn functionality

Last year Audiobus added MIDI pipelines to Audiobus 3 to allow MIDI data to be routed between apps in three different ways as inputs, effects or outputs. 



The new MIDI Learn function makes it easier to create complex routings to external MIDI controllers.
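For the curious, here’s roughly how a MIDI Learn feature works under the hood: while in learn mode, the first incoming control-change message is captured and bound to the chosen destination. This is a generic Python/mido sketch, not Audiobus’s actual implementation, and the port name is hypothetical:

```python
import mido

mappings = {}  # (channel, cc_number) -> destination parameter name

def learn(port_name: str, destination: str) -> None:
    """Bind the next incoming CC message to the given destination."""
    with mido.open_input(port_name) as port:
        for msg in port:
            if msg.type == 'control_change':
                mappings[(msg.channel, msg.control)] = destination
                print(f'Learned CC {msg.control} (ch {msg.channel}) -> {destination}')
                return

# learn('My Controller', 'filter cutoff')  # port name is an assumption
```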



...

Audiobus-compatible apps

Here is a list of MIDI apps that are compatible with AudioBus

Cubasis 2.6 adds new MIDI features/Roli Blocks integration

Cubasis 2.6 Overview

Synonymous with ease of use, Cubasis 2 is a powerful and fully featured iOS-based music production system that pushes the creative envelope. Whether you’re capturing simple ideas or musical masterpieces, Cubasis comes with outstanding, touch-optimized tools for recording, editing, mixing and sharing your music with the world right away. With its second iteration, Cubasis boasts many additions such as real-time time-stretching and pitch-shifting, a studio-grade channel strip, pro-sounding effects, massive instrument refills, a refreshed MIDI Editor and many other great features. Put your hands on three onboard instruments, numerous loops and instrument sounds to creatively lift your music to perfection, together with the included mixer and effects. Once recorded, transfer your music directly to Cubase or share it with the world.

But what’s really interesting is how many new MIDI features Cubasis 2.6 has. 

New features in Cubasis 2.6 

Audio Unit full-screen support*

Tweak sounds and parameters with utmost accuracy, using Cubasis’ super-sized full-screen support for Audio Unit instruments and effects plug-ins. Enjoy maximum productivity, creativity and flexibility, switching between the available screen sizes at lightning speed with only a few taps.

ROLI NOISE Seaboard and Drum Grid visualizer support*

Experience a new approach to making music, using ROLI’s free downloadable NOISE app within Cubasis. Create inspiring drum and melody parts through intuitive gestures, using the unique Seaboard and Drum Grid visualizers, now directly accessible via Cubasis’ Audio Unit full-screen mode.

MIDI CC support for compatible Audio Unit effect plug-ins*

Easily remote control your favorite compatible Audio Unit effect plug-ins via external controllers. No matter if you’re moving effect knobs via MIDI Learn or switching presets via program change — if your Audio Unit effects plug-in supports it, it can be done in Cubasis with great ease. 

*Requires iOS 11 


Check out what you can do with Cubasis 2.6 and Roli. 


Here is an in-depth tutorial on Cubasis and Roli Blocks 


...

Cubasis 2 on the App Store

Read reviews, compare customer ratings, see screenshots, and learn more about Cubasis 2. Download Cubasis 2 and enjoy it on your iPhone, iPad, and iPod touch.


...

Start | Steinberg

Get fascinated by the brand new features that Cubasis 2 comes with such as real-time time-stretch and pitch shift, a studio-grade channel strip, Spin FX, massive instrument refills and many more powerful features.

AudioSwift – Your Trackpad As A MIDI Controller

Control · Improve · Create





Slider

Divide the trackpad into 1 to 4 virtual sliders and send CC or Pitch Bend MIDI messages. Add expressiveness to virtual instruments or automate plugin parameters easily with a trackpad. Edit photos faster in Lightroom.





XY

Use your trackpad as an XY pad to control several parameters at the same time, using one-, two- and three-finger configurations. A great MIDI tool for mobile producers and sound designers.




Mixer

Control one or two faders at a time using simple touches. Move the panning, set send levels, use your trackpad as a jog wheel, and write automation quickly and easily. It’s currently supported in Logic Pro, Pro Tools, Ableton Live, Reaper, Cubase and Studio One.





Trigger

Make quick beats using your trackpad as trigger pads. Play audio clips by tapping your fingers. Up to three fingers can be used at the same time. 




Scale

Choose a tonic note and then select a scale. Slide your fingers from left to right to play notes in the selected key. Apply pressure to the trackpad to send aftertouch MIDI messages. (Aftertouch requires a trackpad with Force Touch.)
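For the technically curious, the idea behind a scale mode can be sketched in a few lines: a horizontal position is quantized to a degree of the chosen scale (an illustration of the concept, not AudioSwift’s actual code):

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale

def position_to_note(x: float, tonic: int = 60, scale=MAJOR) -> int:
    """Map a normalized trackpad position (0.0-1.0) to a MIDI note in the scale."""
    step = int(x * (len(scale) * 2 - 1))      # two octaves of scale degrees
    octave, degree = divmod(step, len(scale))
    return tonic + 12 * octave + scale[degree]

print(position_to_note(0.0))  # 60 -> the tonic (middle C)
print(position_to_note(0.5))  # 71 -> the 7th degree
print(position_to_note(1.0))  # 83 -> the 7th degree, an octave up
```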


AudioSwift (US$24) requires macOS 10.11 or newer. Get a 50% discount using coupon GOLDENFROG50 for a limited time.

Telemidi – Creating music over The Internet in real-time

What is Telemidi?

A system for connecting two DAW environments over the internet to achieve real-time musical ‘jamming’.
The product of Master’s research by Matt Bray.


“…a musician’s behaviour at one location will be occurring at the other location in a near synchronous manner, and vice versa, thus allowing for a ‘jam’-like atmosphere to be mutually shared.”

Matt Bray (Telemidi creator)

Telemidi is an approach to Networked Music Performance (NMP) that enables musicians to co-create music in real-time by simultaneously exchanging MIDI data over The Internet.  Computer networking brings with it the factor of latency (a delay of data transfer), the prevalent obstacle within NMPs, especially when attempting to match the interaction of traditional performance ensembles.  Telemidi compensates for latency via numerous Latency Accepting Solutions (LAS, identified below) embedded within two linked DAW environments, to equip performers with the ability to interact in a dynamic, interactive and ongoing musical process (jamming).  This is achieved in part by employing RTP (Real Time Protocol) MIDI data transfer systems to deliver performance and control information over The Internet from one IP address to another in a direct P2P (peer-to-peer) fashion.  Once it arrives at a given IP address, MIDI data is routed into the complex DAW environment to control any number of devices, surfaces, commands and performance mechanisms.  Essentially, a musician’s behaviour at one location will be occurring at the other location in a near synchronous manner, and vice versa, thus allowing for a ‘jam’-like atmosphere to be mutually shared.  As seen in the video listed below, this infrastructure can be applied to generate all manner of musical actions and genres, whereby participants readily build and exchange musical ideas to support improvising and composing (‘Comprovising’).  Telemidi is a true Telematic performance system.


What is Telematic Performance?

Telematic music performance is a branch of Network Music Performance (NMP) and is a rapidly evolving, exciting field that brings multiple musicians and technologies into the same virtual space. Telematic performance is the transfer of data and performance information over significant distances, achieved by the explicit use of technology. The more effective the transfer, the greater the sense of Telepresence: the ability of a performer to “be” in the space of another performer.  Telematic performances first appeared when Wide Area Networking (WAN) options presented themselves for networked music ensembles via technologies such as ISDN telephony, and options increased alongside the explosion of computer processing and networking developments that gave rise to The Internet.  Unfortunately, in this global WAN environment, latency has stubbornly remained a constant and seemingly unavoidable obstruction to real-time ensemble performance.

Telematic performance has been explored by countless academic, commercial and hobby entities over the last four decades, with limited success. These musical performances have taken many forms throughout the exponential development of computing technologies, yet have been more-or-less restricted by latency at every turn.  For example, there is the inherent latency of the CPU within any given DAW, the additional processing loads of soft/hardware devices, the size and number of data packages generated in a performance, and the delivery of this data over The Internet, which in turn presents issues regarding available bandwidth, data queuing, WiFi strength, etc. This is but one side of the engagement, as we also have the DAW requirements of the reciprocating location, and of course the need for synchronous interplay between the two. Real-time NMPs suffer at the whim of network jitter, data delays and DAW operations.


How Telemidi Works

Telemidi works by exchanging MIDI data in a duplex fashion between the IP addresses of two performers, each of whom runs a near-identical soft/hardware DAW environment.  A dovetailed MIDI channel allocation caters for their respective actions while avoiding feedback loops, in a system with the potential to deliver performance information to and from each location in near real-time (10-30 ms).
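Because an RTP MIDI session (created in macOS’s Audio MIDI Setup, or with Tobias Erichsen’s rtpMIDI on Windows) appears to software as an ordinary MIDI port, forwarding a local controller into the network session can be as simple as the following Python/mido sketch (the port names are assumptions; check mido.get_input_names()):

```python
import mido

# Forward every message from a local controller into the RTP MIDI session,
# where it travels over the WAN to the remote performer's DAW.
with mido.open_input('Novation SL MkII') as controller, \
     mido.open_output('Telemidi Session') as network:
    for msg in controller:
        network.send(msg)  # arrives at the remote node in near real-time
```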

To achieve this musical performance over The Internet, the Telemidi process employed:

1 – Hardware: a combination of control devices

2 – Software: two near-identical Ableton Live sets

3 – Latency Accepting Solutions (LAS): ten examples

4 – RTP MIDI: facilitating the delivery of MIDI data over a WAN

Below is a summary of the items used at each node location during the research stage of the Telemidi project (for more information, and to download the Master’s thesis, go to www.telemidi.org):

Hardware

Below is a list of the hardware used at each location in the Telemidi research:

Laptop Computers:  + Mac and Windows computers used, demonstrating Telemidi accessibility.


Novation SL Mk II MIDI controller keyboard


+ High capacity for customised MIDI routing (both control and performance data)

+ Traditional musical interface (keyboard)


Novation LaunchPad Pro


+ Native integration with Ableton Live

+ Contemporary ‘Grid-based’ composition process

Software

Ableton Live

+ Near-identical Live sets (duplex architecture)
+ 7 pre-composed songs (each split into four sections: A, B, C & D)
+ 54 additional percussion loop patterns
+ 12 synth instruments (native and 3rd-party): 4 each of Bass/Harmony/Lead
+ 16 DSP effects processors (with 2 or more mapped parameters)
+ 286 interleaved MIDI mappings within each Live set
+ 13 of 16 MIDI channels used for shared performance and control data
+ Tempo variation control
+ Volume & start/stop control for each voice (Bass, Harmony & Melody)
+ Record and loop capacity for each voice (Bass, Harmony & Melody)

LATENCY ACCEPTING SOLUTIONS (LAS):

The following processes adapt to and cumulatively overcome the obstacle of latency.  They are ranked in order of efficiency, from 1 (most efficient) to 10 (least efficient).

1 – One-Bar Quantization: All pre-composed, percussive and recorded loops are set to trigger upon a one-bar quantization routine, allowing time (2,000 ms @ 120 bpm) to accommodate network latency between song structure changes (most commonly occurring on a 4 to 8 bar basis).

2 – P2P (Peer-to-Peer) Network Connection: Direct delivery of MIDI data from one IP address to the other. A simple, direct delivery; no third-party browser-based servers are used to calibrate message timing.

3 – Master/Slave Relationship: One node (Alpha) was allocated the role of master and the other (Beta) the role of slave, allowing for a consistent, shared tempo and self-correcting tempo alignment following any network interference.

4 – Pulse-Based Music (EDM) as the Chosen Genre: A genre without reliance on a strict scored format, rather a simple and repetitive pulse.

5 – Floating Progression (manner of Comprovising ideas): Each performer initiates an idea or motif, the other responds accordingly and vice-versa (jamming); any artefacts of latency only play into this process.

6 – 16th-Note Record Quantize: Inbuilt Ableton function ensuring any recorded notes are quantized to the grid.

7 – MIDI Quantize: 3rd-party Max4Live device (16th note) puts incoming WAN MIDI onto the grid of the receiving DAW.

8 – Manual Incremental Tempo Decrease: In the event of critical latency interference, tempo can be reduced incrementally, thus extending the time between each new bar and granting time for the clearance of latency issues.

9 – Kick Drum (bar-length loops): During a period of critical latency interference, a single-bar loop of quarter-note kick drum events is triggered to maintain the genre.

10 – Stop Buttons: During any period of critical latency interference, each voice (beats, percussion, bass, harmony or melody) can be stopped individually to reduce the musical texture, or to stop harmonic dissonance and stuck notes.

RTP MIDI

+ macOS – AppleMIDI, accessed through ‘Audio MIDI Setup’

+ Windows – rtpMIDI software used (created by Tobias Erichsen)

Success of Performance

Two performances were undertaken in the Telemidi research, the first with the performers 7.5 km (4.6 mi) apart, and the second with them 2,730 km (1,696 mi) apart.  Both were recorded and then analysed in detail (see video below), whereby aspects of performance parameters and methods were identified, alongside several fundamental principles of Telematic performance.  A stream of audio was generated at each node, and each has been analysed in the video to identify the interplay between the two musicians, highlight any variations in the music created, and recognize artefacts of network performance.  It was noted that the music generated at each node was strikingly similar, although subtle variations in the rhythmic phrasing of bass, harmony and melody were common.

The Telemidi system ably accommodates all but the most obtrusive latency, yet provides each musician with the capacity to co-create and Comprovise music in real-time across significant geographic distances.  These performances showed constant interplay and the exchange of musical ideas, as can be seen in the 16-minute analysis video below, leaving the door open for many exciting possibilities in the future.


16min Video Analysis


Future Plans

The principles of Telemidi were the focus of Matt Bray’s 2017 Master’s research.  Now that the Telemidi process has been proven to function, the landscape is open for musicians to create and interact with each other in real-time scenarios regardless of their geographic locations.

The next steps are to:

+ Recruit keen MIDI-philes from around the globe to share and exchange knowledge regarding the potential of the Telemidi process (if this is you, please visit www.telemidi.org and leave a message)

+ Identify the most stable, low latency connections to The Internet available, to begin test performances across greater geographic regions

+ Refine and curate the infrastructure to suit various genres (from EDM to contemporary, also including live vocalists/musicians at each location)

+ Produce and promote simultaneous live performance events in capital cities, first nationally (Australia) and then internationally.

If you are at all interested in contributing to, or participating in, the Telemidi process, please contact me, Matt Bray, at www.telemidi.org; I’d love to hear from you and see what possibilities are achievable.

Thanks for checking out Telemidi!!

Matt Bray


MIDI Tool Integrates Real and Virtual Synths

The Live Performance Challenge

The challenge for me as a performing musician has always been the inability to readily access desired sounds and sound layers during a live performance in an effective way. The tools given to us on modern stage keyboards are difficult to manage on the live stage, and even if you take the time and make the effort to program your performance, you are always restricted to controlling only what is inside each of the two or more keyboards you bring to the gig. It is difficult to get them to talk to each other, especially since every song might require a different setup. So, being a hardware and software engineer by trade, I set out to solve some of these problems the last time I was gigging, and came up with a solution I love, which I will describe for you in this post. Better yet, my company just released the latest version of this jewel, including a freeware version for those minimalists who just need the basics. Let’s dig into what this app can do.

Familiar Tools – a Different Way

As performing keyboardists, we are familiar with the concept of layering multiple sounds (patches) to produce a phatter sound, splitting patches across your keys (keyboard split) to play different sounds on sections of your keyboard, and combinations and variations of these techniques, including velocity layering, transposition and more. These types of capabilities have been around since the early days of digital synthesis back in the 1980s, and the modes that allow you to create and save these settings are referred to by different names depending on the instrument manufacturer. Common names are “Combination” (combi) or “Scene”, so let’s use the name “scene” in this post. Once you have programmed and saved a scene that has the mix of sounds you want on your instrument, you can easily call it up using a keypad, a touch screen, or some other means.

Without these functions you are limited to calling up single patches on each keyboard synth. This may be enough for you if you play a small set of sounds, like piano, organ and strings, for your entire performance. However, in my opinion, these functions are essential in a live performance, allowing you to produce fresh, more complex and more varied sounds on each song, to come closer to the sound of your cover songs – if you’re in a cover or tribute band – or closer to your original sound if you recorded originals in a studio.

The Tools Have Hard Boundaries 

The problem is that you can only create scenes within the confines of an integrated instrument. You cannot easily accomplish this if you need to mix sounds from different synths, especially if some of them are virtual instruments on a computer or tablet. Each discrete instrument must be set up individually, either manually or through some external control, in order to call up scenes during a live performance, and this challenge can overwhelm a performer. It discourages the keyboard musician from using all the capabilities available from her instrument when playing live, often resulting in a duller performance.

When I go out to listen to musicians around town, no matter how great the musicians are, I often notice that, because of the limitations I alluded to, they just keep reusing the same sounds on every song, not even changing the tone of the guitar(s). After a while, every song sounds like the previous one, and the performance descends into complacent drudgery. And the reality is that when you are performing live, especially in a club where the music must go on without breaks, it’s difficult to manage complex changes to your gear on the fly to address this issue. It’s somewhat easier in a concert, where audiences may be more forgiving and can be satisfied with some chit-chat between songs.

Breaking Down the Barriers – The Matrix 

Having encountered these problems and frustrations myself, I set out to resolve them when I joined a band again about ten years ago, after a long career in electronics and software development. The solution was to design a software-driven MIDI matrix of inputs and outputs, such that you can totally separate the control signals from the inputs of the synths. By connecting the MIDI output of each of your controllers (keyboards, trigger pads, control surfaces, etc.) to the MIDI inputs of the matrix, and then also connecting all your sound sources’ MIDI inputs to the outputs of the matrix, you can reconfigure the routing of your playing from any of your keyboards onto any of the synths connected to the matrix, including virtual instruments in your computer, instantaneously and on the fly.

With this type of setup we can add functionality to the software-driven matrix to facilitate features such as transpositions, chord mapping, note followers, interval generators, continuous controller (CC) filtering, CC translation, and the like, to create a central device where you can set up scenes that use not one instrument or synth, but any and all synths you have available in your setup, including virtual ones on your notebook or mobile device. The illustration shows the apparent simplicity of the concept.
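In code, the core of such a matrix is surprisingly small: a scene is essentially a routing table from controller ports to synth ports and channels. Here’s a toy Python/mido sketch of the concept (an illustration, not Midi~Kuper’s actual implementation; the port names are hypothetical, and a real app would service each input port concurrently):

```python
import mido

scene = {
    # input port -> list of (output port, MIDI channel, transpose in semitones)
    'Keyboard A': [('Hardware Synth', 0, 0), ('Virtual B3', 1, -12)],
}

outputs = {name: mido.open_output(name)
           for routes in scene.values() for name, _, _ in routes}

with mido.open_input('Keyboard A') as keyboard:
    for msg in keyboard:
        for out_name, channel, transpose in scene['Keyboard A']:
            routed = msg
            if msg.type in ('note_on', 'note_off'):
                routed = msg.copy(note=msg.note + transpose, channel=channel)
            elif hasattr(msg, 'channel'):
                routed = msg.copy(channel=channel)
            outputs[out_name].send(routed)
```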

Of course the matrix alone does not solve the problems until you couple it with a smart and well-thought-out user interface that can, to start with, allow you to save and recall the configurations you create for the matrix. Once you have established these configurations, this is where the fun begins, as you are now able to create layers, splits and transpositions from ANY of your synths, not just within a keyboard workstation, and you can treat your entire setup as if it were a single digital instrument.

You Decide… The Blue Pill or the Red Pill

In the movie The Matrix, if Neo takes the blue pill, he wakes up in his bed. If he takes the red pill, Morpheus shows him “how deep the rabbit hole goes” in the Matrix. Did I get this backwards? It doesn’t matter. If you want to keep the status quo in your performances, take the blue pill, and this blog post goes away. If you want to learn more about the possibilities, let’s take the red pill and examine the matrix!

So how do we realize this MIDI matrix and all its capabilities? Well, today’s computers can process audio at breakneck speed with virtually unnoticeable latency. If a computer can process audio this efficiently, it certainly can run circles around MIDI data. So a laptop, such as many musicians take to the gig, or even a tablet, is an ideal environment to create the most flexible and feature-rich MIDI matrix you can imagine. Dedicated hardware is not necessary. All you need is a MIDI interface (or several MIDI interfaces) with enough ports to connect each of your multitimbral synths, keyboard controllers and control surfaces to your laptop, and optionally, an internal MIDI bridge utility to route MIDI to the virtual synths that you want to use live on the same laptop. If all your devices use class compliant MIDI over USB, you may not even need a separate MIDI interface to do this.

To fully integrate your already integrated workstations into this type of setup, you simply need to put the workstation in LOCAL: OFF mode, at which point the keyboard becomes just another controller on the matrix, and the internal synth engine becomes another engine available for you to use in your integrated setup. In case that confused you a bit: with the matrix, you can set up scenes where you control instrument B from the keyboard of instrument A.

Convinced yet? Where is that red pill?

Midi~Kuper – The Red Pill 

After several years of prototyping, field testing and improvements, my company (mu-C Kuper) finally released the first commercial version of this concept. We call it Midi~Kuper.

Midi~Kuper implements the matrix I discussed, including the ability to split, layer and transpose sounds from any of your synth engines onto a single controller, or a multiplicity of controllers. It allows you to use as few keyboards as you are comfortable with on stage without the concern you will need a sound on a certain keyboard that you cannot put there otherwise. Remember, this product integrates all your synths into a single point of control and the boundaries between synths and controllers fade away.

In order to make you feel like you are dealing with a single instrument made up of all of your available synth engines, the user interface was designed to be streamlined and intuitive, with bubble help everywhere. While constructing scenes, the interface looks like a rack of processors, with the most common controls one click away, and expandable sections for more advanced features. Each control strip in the rack establishes a connection from any of your MIDI controllers to any of your synth engines. You can connect multiple engines to each controller, as well as merge signals from different controllers into a single engine. So, for example, if you want to use your control surface sliders as Hammond B3 drawbars, you can merge signals from the control surface with the signals from the desired keyboard into the B3 emulator, virtual or otherwise, to achieve this result. For multitimbral engines, each control strip can send MIDI from any controller to a particular MIDI channel on the given engine, so you can take full advantage of the multitimbrality of your instruments. But the difference is that Midi~Kuper can assign patches to each of these channels on the fly, based on the scene you construct. More on scene construction later.

Scene construction view

When you put Midi~Kuper into live performance mode, the interface presents a touchable/clickable transport control strip, with additional buttons for scene selection while playing. The app is designed to receive commands wirelessly from any mobile device running Lemur (we are currently working on a proprietary, free remote control add-on for mobile devices). So during performance, you can just tap a big button on your iPhone or iPad to advance to the next scene in your song, or the next song in your performance. If you need click and backing tracks too, it also provides transport control, including song indexing for your DAW, with included configurations for Ableton Live and SONAR.

Performance control strip

Assigning scenes to songs and making set lists out of a song list is achieved through simple drag-and-drop operations. The layout of Midi~Kuper’s windows is such that they always try to occupy the least amount of screen space possible, allowing room for other programs you may want to operate in parallel, such as a DAW, a virtual instrument host program, or the virtual instruments’ UIs. To do this, we abandoned the concept of a multiple-document user interface with a single window within which all other windows must fit. From personal experience, this is not practical for this type of app. Instead, Midi~Kuper uses floating windows we call control strips that you can place anywhere in your workspace, including on multiple monitors. It always “remembers” where you last placed these windows, so you get a repeatable experience every time you launch the app.

In addition to the general features described above, Midi~Kuper has lots of features to help you during your performance, such as displaying lyrics and song cues that can be placed on any monitor, at any position desired. This can help the entire band achieve repeatable consistency in song tempos and, if you have a large repertoire, present the key and other stats for the song that you may want to share with your audience. If you frequently have musicians who fill in, the song key display will help them keep up with the performance. It will also help that one member who plays in other bands and can’t remember what key this band plays the song in. (That’s me.)

Song cue strip

Down the Rabbit Hole

Midi~Kuper is loaded with features to enhance and assist your performance. Let’s get into some of this detail.

Velocity Layering

We already discussed the ability to create splits and layers on a single keyboard using any of your available synth engines hooked to the Midi~Kuper matrix. In addition to normal layering, Midi~Kuper has the ability to construct velocity layers, again using any group of synth engines desired. For those not familiar with the concept, velocity layers allow you to determine which patch (sound) will be produced depending on how hard you play a key. So for example, a common velocity layer setup might be to have a string pad that is layered with a brass section when your playing exceeds a certain velocity threshold. So if you play softly, you will only hear the strings, but if you start playing harder the brass section will start to come in. Normally you can only produce this effect within the same physical instrument if it has that feature. Midi~Kuper allows you to create this type of layering mixing sounds from any of your available synths.  

Achieving Huge Multitimbrality

Midi~Kuper has the ability to send program change commands to the synth engines. In addition, it lets you leverage the capabilities of your multitimbral synths by seeing each MIDI channel as a path to an individual synth engine. Because some synths, especially the virtual ones, can produce undesirable glitches in the sound when executing program changes, it is best to pre-configure your multitimbral devices with the sounds you always use on all but one channel, and leave one channel for patches you may need to change. So for example, if your synth can handle 16 multitimbral parts, one on each MIDI channel, you would set up your most used 15 patches on the first 15 channels, and leave the last channel as one that you will change on the fly via MIDI program change. On the other hand, if you are using virtual analog emulations such as emulated Mini-Moogs or say a Z3TA+ virtual synth, you can set up multiple simultaneous instances of these in your computer to avoid the need for program changes in the middle of your performance. With the exception of these possible conditions, Midi~Kuper will always switch seamlessly between scenes without hung notes or notes cutting out prematurely.
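For example, switching the patch on that one “floating” channel is a single program change message (a Python/mido sketch; the port name is hypothetical, and mido numbers channels 0-15):

```python
import mido

out = mido.open_output('Multitimbral Synth')  # hypothetical port name

# Channels 0-14 hold the 15 pre-configured patches; channel 15 is the one
# slot whose patch a scene changes on the fly.
out.send(mido.Message('program_change', channel=15, program=12))
```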

Continuous Controllers Management 

Midi~Kuper also provides continuous controller (CC) processing, so that you can filter out, translate, scale or even invert CC data on the fly. For example, I find that Expression (CC 11) does a better job on certain B3 emulations than Volume (CC 7). However, my controller puts out Volume (CC 7) when I move the volume pedal. Midi~Kuper easily translates CC 7 to CC 11 in my B3 scenes. I only have to set this up once, and the problem is solved. Another handy feature is filtering. Suppose you have a controller that constantly puts out channel pressure messages to a synth that does not respond well to them, or gets messed up if there are too many. Midi~Kuper gives you the ability to block or filter any CC, such as channel pressure, on any track in your scenes to get around this issue.
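Conceptually, this translation and filtering amounts to a simple message-rewriting loop between input and output, as in this illustrative Python/mido sketch (not Midi~Kuper’s code; the port names are hypothetical):

```python
import mido

with mido.open_input('Controller') as inp, \
     mido.open_output('B3 Emulator') as out:
    for msg in inp:
        if msg.type == 'aftertouch':
            continue  # filter out channel pressure the synth mishandles
        if msg.type == 'control_change' and msg.control == 7:
            msg = msg.copy(control=11)  # translate Volume (CC 7) to Expression (CC 11)
        out.send(msg)
```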

Note Processors – Playing with 3 Hands

Midi~Kuper also supports note processors that can help you get more out of your playing. Currently it supports a Note Mapper and an Interval Generator, with more processors coming in future releases. The best way to illustrate how these can help your performance is with a specific example: the piano intro of “Minute by Minute” by the Doobie Brothers and Michael McDonald. Yes, it’s an old one, but a great example. The intro has a left-hand bass walk-up in octaves, with full chords on the right hand for every two bass notes of the left. A skilled keyboard player can play this readily. However, many players with less than optimal chops may have trouble with this progression. But even if you are a skilled player who can handle the intro without trouble, this intro occurs again in the song at the same time a lead is played on an analog synth. Unless you have three hands or a second keyboardist in the band, your skills will not help you here. However, with Midi~Kuper’s Note Mapper, you can map the chords the right hand is supposed to play onto the bass notes played with your left thumb, and that frees up your right hand to play the lead on a second keyboard or on a split section of the same keyboard. For players with less chops, this feature allows you to play the intro with just two fingers! I will have a demo video on this on our YouTube channel shortly.
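At its core, the Note Mapper idea is just a lookup table from a played note to the notes that should sound, as in this toy sketch (the trigger notes and chord voicings here are made up for illustration):

```python
# Map a left-thumb bass note to a full right-hand chord voicing.
chord_map = {
    36: [55, 60, 64],  # playing C2 triggers a C major voicing
    38: [57, 62, 65],  # playing D2 triggers a D minor voicing
}

def map_note(note: int) -> list[int]:
    """Return the notes to sound; unmapped notes pass through unchanged."""
    return chord_map.get(note, [note])

print(map_note(36))  # [55, 60, 64]
print(map_note(40))  # [40]
```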

The other processor currently available, the Interval Generator, can produce intervals that follow a set scale (chromatic, major, harmonic minor, Dorian minor, etc.) to help you do similar things. For example, it’s great for salsa piano riffs and accompaniments, jazz bass progressions, as when Oscar Peterson plays in 10ths that my hand cannot reach, and other special effects.

The Possibilities

Now stop and think about that for a moment. You now have an app that can create layers, splits, transpositions and other useful functions using ALL your synths. These magic scenes or combis that you could only create within the confines of a single instrument can now be created using sounds from any and all of your stage synths. You are no longer confined to a single instrument. It is now possible, for example, to split an 88-note keyboard into three sections: one with a piano coming from a physical synth or, say, Spectrasonics’ Keyscape on the computer; another with a B3 coming from an emulator box; and the third with a layer of horns from another channel on the first physical synth and strings from a Kontakt sampler in your computer. And you can call up such scenes on the fly without any delays or glitches, because Midi~Kuper makes sure that the transitions between scenes are handled seamlessly.

A Note about Virtual Instrument Use on Stage

One of the reasons I designed Midi~Kuper was that I wanted to be able to use the same virtual instruments I love when composing on my computer, but on a live stage. The usability of the virtual instrument hosts I have looked at is limited, and while using a DAW like SONAR or Ableton Live as a virtual instrument host has its great advantages, they are not set up to be particularly friendly for managing changes during a live performance unless you are playing DJ style.

Midi~Kuper is a great complement to these hosts because of its ability to leverage virtual instruments in such a way as to achieve multitimbrality. Suppose you have to reproduce the sound of three simultaneous Mini-Moog patches, but you cannot afford to put three of them on stage (they are expensive). Well, if you have a good virtual version of the Mini-Moog or a great virtual synth like Z3TA+, you can run three or more instances of these synths simultaneously using different patches and Midi~Kuper can combine them, layer them, split them, transpose them by way of a scene setup, send program changes, and then combine them with all your other synths without consideration for physical boundaries.

The possibilities are endless, allowing you to even control devices such as voice processors and guitar pedals based on the song you are performing. Here is an example diagram of a setup I have used on stage.

Notice that Midi~Kuper now has the ability to not only manage my synths and controllers, but also to change the sound on my guitar, and settings on the vocal processor, as well as sending chord information for the voice processor to follow.

Managing your Performance

Midi~Kuper is not only capable of creating these instantly callable, very complex scenes using any or all of your available synth engines, it provides this capability in pursuit of its primary goal: to elevate the quality, professionalism and variety of your live performance while at the same time simplifying your workflow. Let’s examine these details further.

Midi~Kuper can maintain your song lists, as well as data associated with each of these songs. As you create scenes (single selections or combinations of selections from any of your synths mapped onto your selected controllers), you can assign them to one or more songs. Each song can have one or more scenes in sequence assigned to it. Scenes can be repeated within the same song if you want to select them sequentially and they do in fact repeat. For example, two verses with one scene, a bridge with another scene and back to the verse scene. If your song performance is more free-flow, Midi~Kuper allows you to randomly call up any of the scenes assigned to the song as you perform at the touch of a button on your remote control device or touch screen.

Scenes can be assigned to more than one song. The separation of song and scene was made to give you this flexibility.

Once you have assigned at least one scene to every song, you can create set lists of songs. When you are ready to perform a given set, you can put Midi~Kuper in performance mode, and then just step through the scenes in your songs at the tap of a big button on your remote device or on your touch screen. In the meantime, Midi~Kuper will display on its cue strip the name of the song, its key, its tempo (with a blinking indicator), its author and release date if you have entered this data for the song. If you entered lyrics for the song, a separate window will pop up at your designated monitor, size and location, to display the lyrics.

Supported Environments and Future Plans

Don’t quit reading on me yet but, currently, Midi~Kuper works only on Windows 7 or above, on most processors that run this operating system. I have run it on a $170 tablet with an Atom processor running Windows 10, taking advantage of its touch screen capability.

We are keenly aware of the fact that most musicians prefer the Mac environment, so we are working diligently to release both Mac and iOS versions. We do not have a firm release target date at the time of this writing, but we fully expect to complete the effort in 2018. We also just released a freeware version that will have all features except note and CC processing. The full version (Midi~Kuper Pro) gives you a fully functional 30-day free trial, after which you can purchase a license to activate the product permanently. You can get your copies at 

Putting it All Together

Midi~Kuper is an application that allows you to create a flexible MIDI matrix that will reconfigure your gear’s MIDI routing on the fly, with features that were only previously possible within the confines of synth workstations. Midi~Kuper allows you to break these boundaries and limitations and treat all your gear as an integrated set of synth engines that can be combined, split, layered and controlled as if they were a single instrument. It also manages your live performance, drastically changing your sound seamlessly and instantly, using any or all your available synth engines and controllers in any configuration you desire. So give it a try, and let me know what you think.



...

Home Page – µC Kuper

Read more about Midi~Kuper and get your free trial or freeware copy at our website. 

May 26-Why ReWire Is Very Cool

ReWire is a software protocol that allows two (or sometimes more) software applications to work together as one integrated program. For example, suppose you wish your DAW of choice had Propellerhead Reason’s roster of way cool virtual instruments, but you don’t want to learn a different DAW. No problem: use ReWire with your DAW, and get Reason into the mix.

ReWire requires a client application (also called the synth application) that plugs into a ReWire-compatible host program (also called the mixer application) such as Cakewalk, Cubase, Digital Performer, Live, Logic, Pro Tools, Samplitude, Studio One Pro, etc. In the host, you’ll have an option to insert a ReWire device. The process is very much like inserting any virtual instrument, except that you’re plugging in an entire program, not just an instrument. You usually need to open the host first and then any clients, and close programs in the reverse order. You won’t break anything if you don’t, but you’ll likely need to close your programs, then re-open them in the right order. 

ReWire sets up relationships between the host and client programs.

Here’s how the client and host work together.

  • The client’s audio outputs stream into the host’s mixer.
  • The host and client transports are linked, so that starting or stopping either one starts or stops the other.
  • Setting loop points in either application affects both applications.
  • MIDI data recorded in the host can flow to the client (excellent for triggering soft synths).
  • Both applications can share the same audio interface.

ReWire is an interconnection protocol that doesn’t require much CPU power, but note that you’ll need a computer capable of running two (possibly powerful) programs simultaneously. Fortunately, most modern computers can indeed handle ReWired programs, so find out for yourself what this protocol can do.

Using MIDI in the OnSong App

“I can consolidate my foot pedals down to just a few that let me move through my song and set list. Then OnSong acts as a controller using Bluetooth MIDI and reconfigures my pedal board and keyboards specifically for each song.”

 


by Jason Kichline

MIDI, or “Musical Instrument Digital Interface”, is a powerful digital communications protocol that ushered in the age of electronic music. Even though it was first released in 1983, its use is still prevalent in modern computing. Apple has built CoreMIDI into iOS, making the iPad and iPhone great tools for musicians on-stage. In addition, MIDI is now being used to communicate between music apps on a device, as well as with external devices.

Setting Up MIDI

The first step in using MIDI on an iOS device is connecting the standard MIDI or USB cable to the device. This can be accomplished with:

  • MIDI adapters, which connect to your 30-pin or Lightning port and provide traditional MIDI “DIN-5” connections.
  • The USB Camera Connection Kit, which allows MIDI devices with a USB port to be connected directly to the iOS device.
  • MIDI over WiFi, which can be used as long as you have a computer or host device to create the MIDI network session.

Triggering Actions from MIDI

Once you have a MIDI device connected, you can map MIDI signals to OnSong actions. This can be used to scroll the chord chart, navigate your set, or trigger backing tracks. Any action that can be performed in OnSong can be mapped to MIDI in the MIDI Triggers screen.

*Note: MIDI devices may send signals differently depending on their intended use. For instance, the iRig Blueboard device becomes a latching pedal with control changes. OnSong has advanced MIDI Settings to handle some of these differences.

Sending and Receiving MIDI

OnSong can also be used to send MIDI commands to other MIDI devices when songs are viewed or when sections are selected. You can configure these commands by tapping and holding the title of the song or a section of your song to use the Section Mapping Menu.

In addition, you can have OnSong switch to a song by listening for specific MIDI commands. These are typically set up using Metadata with the Metadata Editor in the Song Editor.

Virtual MIDI

We typically think of MIDI as having to do with wires that connect instruments together. MIDI can also use wireless networking like WiFi or Bluetooth. But MIDI can also operate directly between apps, with Virtual MIDI.

Virtual MIDI has all of the power of MIDI but travels between apps. You can configure which apps can send and receive MIDI with OnSong using Sources and Destinations in the MIDI Settings Menu.






...

OnSong | Chord Chart Management for Musicians

Chord chart management for musicians for use on tablets and mobile devices

MIDI and the Surface Pen

Pens and styluses have been employed as computer interaction devices for quite some time now. Most commonly they were used with peripheral graphics tablets to give the artist or designer a more natural flow than a mouse could muster. With the release of the Surface Pro hybrid laptop in 2012, Microsoft brought along a digital pen that could work directly on the screen. It was intended to bridge the gap between the demands of desktop software and the tablet touch-screen form factor. In a mouse- and track-pad-free computing environment, how better to access the finer details that your thick fingertips couldn’t manage?

The advantages for the artist quickly become apparent. As the Surface Pro has evolved, its graphical power has reached the point where it's a completely competent sketching, drawing and design platform. But there's another group of artists for whom the digital pen has an awful lot of potential: the musician.

This is probably most joyously demonstrated by the Windows 10 app StaffPad. StaffPad takes the idea of writing music completely literally: it presents you with a blank sheet of manuscript paper and asks you to start writing. Combining the digital pen with handwriting recognition, StaffPad interprets your handwritten notes as digital MIDI information directly on a score, which it can then play back through a virtual orchestra. It's a stunning piece of work, and remarkably fluid and creative to use.

Most of us approach music creation in a more sequenced format, and the pen has a lot to offer here as well. Entering notes into a piano roll immediately comes to mind, as does editing notes, trimming clips or moving blocks in an arrangement. Consider drawing in track automation with a pen rather than a mouse. How much more fluid and natural could that be?

In many ways the pen feels like it's simply replacing the actions of a mouse, but it doesn't quite work like that. The Surface Pen works through a combination of technology in the pen and a corresponding layer of technology in the screen. It's not just touch-screen technology: you can't take the Surface Pen and use it on another brand of screen; it will only work on Surface products. While that affords the technology a great deal of power, it can also trip up software that isn't able to interpret it properly. In many cases the pen works just like a mouse replacement, but in others it can cause weird behaviour, or no behaviour at all.

When PreSonus first released its touch-enabled version 3 of Studio One, its reaction to the Surface Pen on the Surface Pro 3 was to get quickly confused and then lock up. In Cakewalk SONAR, again touch-enabled, there were areas of the software that completely refused to acknowledge the presence of a pen on the screen. Both of those DAWs have far better support for it now. Ableton Live appeared to work with both touch and the pen without any trouble, except that when grabbing a fader or knob the value would leap between maximum and minimum, making it impossible to set accurately. Adding "AbsoluteMouseMode" to a preferences file cured that particular oddity.

Where it's been most unflinchingly successful is within Steinberg's Cubase and Avid's Pro Tools, neither of which has expressed any interest in touch or pen interaction, yet it simply works anyway. From entering and editing notes to drawing in long wiggly lines of modulation and automation, the pen becomes a very expressive tool.

However, the full immersion the pen can offer tends to mean eschewing the keyboard. When you're leaned in, as I mentioned earlier, having to pull back to use a keyboard shortcut can be jarring, interrupting your workflow. There's a certain amount you can do with the on-screen virtual keyboard, but it can completely cover whatever it is you're trying to edit, so it's not ideal. This highlights what I see as the current flaw in the Surface Pen workflow: the lack of a relevant, customisable toolbar.

When editing notes or an arrangement with the pen, simple tasks such as copy and paste become cumbersome. You can invoke a right-click with the squeeze of a button and then select these tasks from the list, or you can glide through the menu system, but neither option is as elegant as a simple Ctrl-C and Ctrl-V. You can quickly extend that to other actions: opening the editor or the mixer, duplicating, setting loop points. There's a whole raft of commands hidden away behind menus or keyboard shortcuts that are annoying to reach with just the pen for input. Adding a simple macro toolbar with user-definable keyboard shortcuts would greatly enhance the pen's workflow. It's possible to do this with third-party applications, but it really needs support at the OS level.

This is something Dell have considered with their Canvas touch-screen and digital pen system, which incorporates floating "palettes": little toolbars that access useful keyboard shortcuts. Some DAWs, such as Bitwig Studio and PreSonus Studio One, have finger-friendly toolbars that can perform a similar function, but something more global would be helpful.

With the release of the Surface Pro (2017), Microsoft have introduced an improved Surface Pen with 4 times the resolution of the previous version. Although more relevant to the artist who draws, it's interesting to see pen support improving in many DAWs. Its usefulness is becoming more apparent, and if you consider the Dell Canvas and the iPad Pro's Pencil, along with the development of the Surface into the larger Surface Studio and laptop form factors, it's also becoming more widespread.

At the time of writing, only one DAW manufacturer has stepped up to push the digital pen into more than just emulating mouse tasks. Bitwig Studio has some special MPE (Multidimensional Polyphonic Expression) functionality that allows you to map pen pressure to parameters on MPE-compatible virtual instruments. More on that in another article, but hopefully more creative uses will emerge as this gains popularity.

The digital pen offers many creative opportunities. It frees you from the mouse/keyboard paradigm and pushes you into a more natural and fluid way of working. It lacks support in some software, and there's work to be done on optimising the workflow by combining it with a toolbar, but it offers a different and creative approach to musical computer interaction.

Here's a video of me reviewing the Microsoft Surface Book for music production, which features a lot of pen use and examples. There's plenty more on the YouTube channel:

BLE-MIDI, Sonar and Zivix Jam Stick- A New Way to Enter MIDI into your DAW

This is an article that was originally posted on the Cakewalk blog and they kindly gave us permission to excerpt it here on MIDI.org. 

Greetings! My name is Mike Green, Music Product Specialist at Zivix; we make the jamstik+ portable SmartGuitar and PUC+ wireless MIDI link. I'm primarily a guitar player, and in my 15+ years of musical composition, MIDI has enabled me to write and record quickly. In full disclosure: I'm a lousy keyboardist. The jamstik+ and the availability of Bluetooth MIDI on Windows 10 have revolutionized what used to be a point-and-click endeavor. Now I can control virtual instruments in Cakewalk's SONAR with the jamstik+ digital guitar, entering data wirelessly via Bluetooth MIDI, using the guitar skills that come most naturally to me.

by Mike Green, Music Product Specialist at Zivix


Make Sure Your PC is Bluetooth 4.0 Compatible.

With recent updates to the Windows 10 OS, SONAR can take advantage of Bluetooth 4.0 Low Energy (BLE) to connect Bluetooth-enabled MIDI devices. Now that almost all operating systems have this capability, performance is only going to get better from here, and more controllers will start "ROLI"-ing in (haha). Check the specs on your PC (look for Bluetooth in Device Manager) to see if it's Bluetooth 4.0 compatible. If not, you can always try various BLE dongles, like this one by Asus.

Connecting is easy

  1. Pair to Windows 10
  2. Open SONAR
  3. Enable your MIDI Device In/Out Check-boxes in Preferences
  4. Select your Soft-Synth
  5. Play!
For more on SONAR, Zivix and BLE-MIDI, check out the full article below and look for links to special deals.

Now THAT’s a Horn Solo

Music is a visual language, too. Composer Andrew Huang used the piano roll editor in his MIDI sequencer to create sound from a picture of a unicorn. Each dot and line outlining the mythical creature triggers a MIDI note. To make the notes harmonize, Huang had to think both visually and musically. See his creative approach in the video.

(Hat tip to CMUSE.)

5 MIDI Quantization Tips

Make quantization work for you, not against you 

Quantization is the process of moving MIDI data (usually notes, but also potentially other data) that’s out of time to a rhythmic “grid.” For example, if a kick drum is slightly behind the beat, quantization can move it right on the beat. Quantization was controversial enough when it was limited to MIDI, but now that you can quantize audio, it’s even more of an issue. Although some genres of music—like electro and other EDM variants—work well with quantization, excessive quantization can compromise a piece of music’s human feel. 
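
Mechanically, basic quantization is nothing more than snapping each event to the nearest grid line. A minimal sketch in Python, assuming note start times stored as absolute MIDI ticks:

    def quantize(ticks, grid):
        """Snap a note's start time to the nearest grid line.

        grid is the spacing in ticks, e.g. 240 for 16th notes
        at 960 ticks per quarter note."""
        return round(ticks / grid) * grid

    quantize(960 + 37, 240)   # a kick 37 ticks late lands back on the beat: 960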

Some people take a “holier than thou” approach to quantization by saying it’s for musical morons who lack the chops to get something right in the first place. These people, of course, never use quantization…well, at least while no one’s looking. But quantization has its place; it’s the ticket to ultra-tight grooves, and a way to let you keep a first and inspired take, instead of having to play a part over and over again to get it right—and lose the human feel by beating a part to death. Like any tool, quantization can be used or misused, so let’s concentrate on how to make quantization work for you—and avoid giving an overly rigid, non-musical quality to your work. 

TRUST YOUR FEELINGS, LUKE 

Computers are terrible music critics. Forcing music to fit the rhythmic criteria established by a machine is silly—it’s real people, with real emotions, who make and listen to music. To a computer, having every note hit exactly on the beat may be desirable, but that’s not the way humans work. 

There’s a fine line between “making a mistake” and “bending the rhythm to your will.” Quantization removes that fine line. Yes, it gets rid of the mistakes, but it also gets rid of the nuances. 

When sequencers first appeared, musicians would often compare the quantized and non-quantized versions of their playing. Invariably, after hearing the quantized version, the reaction would be a crestfallen "gee, I didn't realize my timing was that bad." But in many cases the human was right, not the machine. I've played solo lines where notes were off by as much as 50 milliseconds from the beat, yet they sounded right. Tip #1: You dance; a computer doesn't. You are therefore much more qualified than a computer to determine what rhythm sounds right.

WHY QUANTIZATION SHOULD BE THE LAST THING YOU DO 

Some people quantize a track as soon as they’ve finished playing it. Don’t! In analyzing unquantized music, you’ll often find that every instrument of every track will tend to rush or lag the beat together. In other words, suppose you either consciously or unconsciously rush the tempo by playing the snare a bit ahead of the beat. As you record subsequent overdubs, these will be referenced to the offset snare, creating a unified feeling of rushing the tempo. If you quantize the snare part immediately after playing, then you will play to the quantized part, which will change the feel. 

Another possible trap occurs if you play several unquantized parts and find that some sound "off." The expected solution would be to quantize the parts to the beat, yet the "wrong" parts may be off not relative to the absolute beat, but relative to a part that was purposely rushed or lagged. In the example above of a slightly rushed snare part, you'd want to quantize your parts in relation to the snare, not a fixed beat. If you quantize to the beat, the rhythm will sound even more off, because some parts will be off with respect to absolute timing, while other parts will be off with respect to the relative timing of the snare hit. At this point, most musicians mistakenly quantize everything to the beat, destroying the feel of the piece. Tip #2: Don't quantize until lots of parts are down and the relative—not absolute—rhythm of the piece has been established.

SELECTIVE QUANTIZATION 

Often only a few parts of a track will need quantization, yet for convenience musicians tend to quantize an entire track, reasoning that it will fix the parts that sound wrong and not affect the parts that sound right. However, the parts that sound right may be consistent to a relative rhythm, not an absolute one. 

The best approach is to go through a piece, a few measures at a time, and quantize only those parts that are clearly in need of quantization—in other words, they sound wrong. Very often, what’s needed is not quantization per se but merely shifting an offending note’s start time. Look at the other tracks and see if notes in that particular part of the tune tend to lead or lag the beat, and shift the start time accordingly. Tip #3: If it ain’t broke, don’t fix it. Quantize only the notes that are off enough to sound wrong. 

BELLS AND WHISTLES

Modern-day quantization tools, whether for MIDI or audio, offer many options that make quantization more effective. One of the most useful is quantization strength, which moves a note closer to the absolute beat by a particular percentage. For example, if a note falls 10 milliseconds ahead of the beat, quantizing to 50% strength would place it 5 milliseconds ahead of the beat. This smooths out gross timing errors while retaining some of the original part's feel (Fig. 1).

Fig. 1: The upper window (from Cakewalk SONAR) shows standard Quantization options; note that Strength is set to 80%, and there’s a bit of Swing. The lower window handles Groove Quantization, which can apply different feels by choosing a “groove” from a menu.
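
A sketch of strength-based quantization, extending the function above (the 10-ticks-ahead example mirrors the 10-millisecond case described earlier):

    def quantize_strength(ticks, grid, strength=0.5):
        """Move a note toward the nearest grid line by a percentage.

        strength=1.0 is full quantization; 0.5 halves the timing error."""
        target = round(ticks / grid) * grid
        return round(ticks + (target - ticks) * strength)

    quantize_strength(950, 240, 0.5)   # 10 ticks ahead becomes 5 ticks ahead: 955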

Some programs offer “groove templates” (where you can set up a relative rhythm to which parts are quantized), or the option to quantize notes in one track to the notes in another track (which is great for locking bass and drum parts together). Tip #4: Study your recording software’s manual and learn how to use the more esoteric quantization options.
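
Quantizing one track to another is conceptually just as simple: each note snaps to the nearest event in the reference track rather than to a fixed grid. A hypothetical sketch:

    def quantize_to_track(note_ticks, reference_ticks):
        """Snap each note to the nearest note in a reference track,
        e.g. locking a bass part to the drum hits."""
        return [min(reference_ticks, key=lambda r: abs(r - t))
                for t in note_ticks]

    bass  = [955, 1930, 2880]
    drums = [960, 1920, 2895]
    quantize_to_track(bass, drums)   # -> [960, 1920, 2895]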

EXPERIMENTS IN QUANTIZATION STRENGTH

Here’s an experiment I like to conduct during sequencing seminars to get the point across about quantization strength.

First, record an unquantized and somewhat sloppy drum part on one track. It should be obvious that the timing is off.

Then copy it to another track, quantize it, and play just that track back; it should be obvious that the timing has been corrected. Then copy the original track again but quantize it to a certain strength—say, 50%. It will probably still sound unquantized. Now try increasing the strength percentage; at some point (typically in the 70% to 90% range), you’ll perceive it as quantized because it sounds right. Finally, play back that track along with the one quantized to 100% strength and check out the timing differences, as evidenced by lots of slapback echoes. If you now play the 100% strength track by itself, it will sound dull and artificial compared to the one quantized at a lesser strength. Tip #5: Correct rhythm is in the ear of the beholder, and a totally quantized track never seems to win out over a track quantized to a percentage of total quantization.

Yes, quantization is a useful tool. But don’t use it indiscriminately, or your music may end up sounding mechanical—which is not a good thing unless, of course, you want it to sound mechanical!

How to Find MIDI Sequencer “Gotchas”

Fix those little “gotchas” before they make it into the final mix

by Craig Anderton

MIDI sequencing is wonderful, but it’s not perfect—and sometimes, you’ll be sandbagged by problems like false triggers (e.g., what happens when you brush against a key accidentally), having two different notes land on the same beat when quantized, voice-stealing that cuts off notes abruptly, and the like. These glitches may not be obvious when other instruments are playing, but they nonetheless can muddy up a piece or even mess up the rhythm. Just as you’d “proof” your writing, it’s a good idea to “proof” sequenced tracks.

Begin by listening to each track in isolation; this reveals flaws more readily than listening to several tracks simultaneously. Headphones can also help, as they may reveal details you’d miss over speakers. As you listen, also check for voice-stealing problems caused by multi-timbral soft synths running out of voices. Sometimes if notes are cut off, merely changing note durations to prevent overlap—or deleting one note from a chord—will solve the problem. But you may also need to dig deeper into some other issues, such as . . .

NOTES WITH ABNORMALLY LOW VELOCITIES OR DURATIONS

Even if you can’t hear these notes, they still use up voices. They’re easy to find in an event list editor, but if you’re in a hurry, do a global “remove every note with a velocity of less than X” (or for duration, “with a note length less than X ticks”) using a function like Cakewalk Sonar’s DeGlitch option (Fig. 1).

Fig. 1: Sonar’s DeGlitch function is deleting all notes with velocities under 10 and durations under 10 milliseconds.

Note that most MIDI guitar parts benefit greatly from a quick cleanup of notes with low velocities or durations.
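
If your software lacks such a function, the same kind of global filter is easy to sketch. Here's an illustrative example using Python's mido library (velocity filtering only; duration filtering is omitted because it requires pairing each note-on with its note-off):

    # Remove note-ons below a velocity floor, in the spirit of DeGlitch.
    # Assumes the mido library; the file names are hypothetical.
    import mido

    def deglitch(path, min_velocity=10):
        mid = mido.MidiFile(path)
        for track in mid.tracks:
            kept, carry = [], 0
            for msg in track:
                if msg.type == 'note_on' and 0 < msg.velocity < min_velocity:
                    carry += msg.time      # preserve the timing of later events
                else:
                    kept.append(msg.copy(time=msg.time + carry))
                    carry = 0
            track[:] = kept
        return mid

    deglitch('take1.mid').save('take1_clean.mid')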

UNWANTED AFTERTOUCH (CHANNEL PRESSURE) DATA

If your master controller generates aftertouch (pressure) but a patch isn’t programmed to use it, you’ll be recording lots of data that serves no useful purpose. When driving hardware synths, this can create timing issues and there may even be negative effects with soft synths if you switch from a sound that doesn’t recognize aftertouch to one that does.

Note that there are two types of aftertouch: channel aftertouch, which generates a single message that applies to all notes being pressed, and polyphonic aftertouch, which generates individual messages for each note being pressed. The latter sends a lot of data down the MIDI stream, but as there are few keyboard controllers with polyphonic aftertouch, it's unlikely you'll encounter this problem.

Steinberg Cubase’s Logical Editor (Fig. 2) is designed for removing specific types of data, and one useful application is removing unneeded aftertouch data.

Fig. 2: In this basic application of Cubase’s Logical Editor, all aftertouch data is being removed.

Note that many recording programs disable aftertouch recording as the default, but if you enable it at some point, it may stay enabled until you disable it again.
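
Stripping aftertouch after the fact is also straightforward. A minimal sketch with Python's mido library, which calls channel pressure 'aftertouch' and polyphonic pressure 'polytouch':

    # Delete channel and polyphonic aftertouch from every track of a file.
    import mido

    def strip_aftertouch(path):
        mid = mido.MidiFile(path)
        for track in mid.tracks:
            kept, carry = [], 0
            for msg in track:
                if msg.type in ('aftertouch', 'polytouch'):
                    carry += msg.time      # keep later events where they were
                else:
                    kept.append(msg.copy(time=msg.time + carry))
                    carry = 0
            track[:] = kept
        return mid

    strip_aftertouch('song.mid').save('song_no_aftertouch.mid')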

OVERLY WIDE DYNAMIC VARIATIONS

This can be a particular problem with drum parts played from a keyboard—for example, some all-important kick drum hits may be much lower than others. There are two fixes: Edit individual notes (accurate, but time-consuming), or use a MIDI edit command that sets a minimum or maximum velocity level, like the one from Sony Acid Pro (Fig. 3). With pop music drum parts, I often limit the minimum velocity to around 60 or 70.

Fig. 3: Sony’s Acid Pro makes it easy to restrict MIDI dynamics to a particular range of velocity values.
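
The underlying edit is a simple clamp. A sketch, again assuming Python and the mido library:

    # Clamp note-on velocities to a range, e.g. a floor of 60 for pop drums.
    import mido

    def limit_velocity(path, low=60, high=127):
        mid = mido.MidiFile(path)
        for track in mid.tracks:
            for msg in track:
                if msg.type == 'note_on' and msg.velocity > 0:
                    msg.velocity = min(max(msg.velocity, low), high)
        return mid

    limit_velocity('drums.mid', low=60).save('drums_tightened.mid')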

DOUBLED NOTES

If you “bounce” a key (or drum pad, for that matter) when playing a note, two triggers for the same note can end up close to each other. This is also very common with MIDI guitar. Quantization forces these notes to hit on the same beat, using up an extra voice and producing a flanged/delayed sound. Listening to a track in isolation usually reveals these flanged notes; erase one (if two notes hit on the same beat, I generally erase the one with the lower velocity value). Some programs offer an edit function that deletes duplicates automatically, such as Avid Pro Tools’ Delete Duplicate Notes function (Fig. 4).

Fig. 4: Pro Tools has a menu item dedicated specifically to eliminating duplicate MIDI notes.
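
The logic behind such a function is easy to see in miniature. A sketch on plain note tuples (start tick, pitch, velocity), keeping the louder of any two duplicates:

    def delete_duplicates(notes, window=10):
        """Drop the lower-velocity copy when two notes of the same pitch
        start within `window` ticks of each other."""
        kept = []
        for note in sorted(notes):
            start, pitch, velocity = note
            dup = next((k for k in kept
                        if k[1] == pitch and abs(k[0] - start) <= window), None)
            if dup is None:
                kept.append(note)
            elif velocity > dup[2]:        # keep the louder of the two
                kept[kept.index(dup)] = note
        return kept

    delete_duplicates([(960, 36, 90), (962, 36, 40)])   # -> [(960, 36, 90)]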


NOTE OVERLAPS IN SINGLE-NOTE LINES

This applies mostly to bass and wind instruments. In theory, with single-note lines you want one note to end before the next begins. Even slight overlaps make the part sound mushier (bass in particular loses "crispness"), but what's worse, two voices will briefly play where only one is needed, causing voice-stealing problems. Some programs let you fix overlaps as a Note Duration editing option.

Note, however, that with legato mode you do want notes to overlap. In this mode, a note transitions smoothly into the next without re-triggering an envelope when the next note occurs; in a series of legato notes, the envelope attack occurs only on the first note of the series. If the notes overlap without legato mode selected, you'll hear separate articulations for each note. With an instrument like bass, legato mode can simulate sliding from one fret to another to change pitch without re-picking the note.
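
Here's a sketch of such an overlap fix on a monophonic line, using (start, duration, pitch) tuples in ticks; as noted above, you'd skip this on parts meant to play legato:

    def trim_overlaps(notes, gap=1):
        """Shorten each note so it ends `gap` ticks before the next starts."""
        notes = sorted(notes)
        for i in range(len(notes) - 1):
            start, dur, pitch = notes[i]
            next_start = notes[i + 1][0]
            if start + dur > next_start - gap:
                notes[i] = (start, max(1, next_start - gap - start), pitch)
        return notes

    trim_overlaps([(0, 500, 40), (480, 480, 43)])
    # -> [(0, 479, 40), (480, 480, 43)]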

Craig Anderton is an Executive Vice-President at Gibson Brands, and Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages. This article is reprinted with the express written permission of HarmonyCentral.