
GeoShred Studio for macOS Released

GeoShred introduces a new paradigm for musical instruments, offering fluid expressiveness through a performance surface featuring the innovative “Almost Magic” pitch rounding. This cutting-edge software combines a unique performance interface with physics-based models of effects and musical instruments, creating a powerful tool for musicians. Originally designed for iOS devices, GeoShred is now available as an AUv3 plug-in for desktop DAWs, expanding its reach and integration into professional music production workflows.

GeoShred Studio, an AUv3 plug-in, runs seamlessly on macOS devices. Paired with GeoShredConnect, musicians can establish a MIDI/MPE connection between their iOS device running GeoShred and GeoShred Studio, enabling them to incorporate GeoShred’s expressive multi-dimensional control into their desktop production setup. This connection allows users to perform and record tracks from their iOS device as MIDI/MPE, which can be further refined and edited in the production process.

iCloud integration ensures that preset edits are synchronized between the iOS and macOS versions of GeoShred. For example, a preset saved on the iOS version of GeoShred automatically syncs with GeoShred Studio, providing a seamless experience across platforms.

Equipped with a built-in guitar physical model and 22 modeled effects, GeoShred Studio offers an impressive array of sonic possibilities. For those looking to expand their musical palette, an additional 33 physically modeled instruments from around the globe are available as in-app purchases (IAPs). These instruments range from guitars and bowed strings to woodwinds, brass, and traditional Indian and Chinese instruments.

GeoShred Studio is designed to be performed expressively using GeoShred’s isomorphic keyboard.

For users who don’t own the iOS version, the free GeoShred Control MPE controller (https://apps.apple.com/us/app/geoshred-control/id1336247116) is available for use with GeoShred Studio.

GeoShred Studio is also compatible with MPE controllers, conventional MIDI controllers, and even breath controllers, offering a wide range of performance options. GeoShred Studio is free to download, but core functionality requires the purchase of GeoShred Studio Essentials; its instruments are distinct from those in the iOS/iPadOS app, and iOS/iPadOS purchases do not transfer.

Works with macOS Catalina or later.

GeoShred, unleash your musical potential!

We are offering a 25% discount on all iOS/iPadOS and macOS products in celebration of GeoShred 7, valid until October 10, 2024. See the pricing table at moforte.com/pricing


Caedence: browser-based music collaboration and performance software

Caedence is a browser-based music collaboration and performance software that allows people to sync and customize virtually every aspect of performances across devices – in real time – to help them learn faster, play better, and create amazing performances with less time, money, and effort. 

Now in open beta, Caedence began as a passion project but has grown considerably with the help and support of the MIDI Association.

How It All Started

It was 2018 in Minneapolis, Minnesota. Caedence founder Jeff Bernett had just joined a new six-person cover band and taken on the role of de facto Musical Director. The group had enormous potential, but also very limited time to prepare three hours of material for performance. Already facing the usual uphill battle of charting songs and accommodating learning styles, Jeff’s challenge was further complicated by a band leader who insisted on using backing tracks – famous for making live performance incredibly unforgiving.

Jeff knew of a few existing solutions that could help. But nothing got to the heart of the issue his group was experiencing: the jarring and stifling disconnect between individual practice, band rehearsal, and live performances. This disconnect is known and felt by all musicians. So why wasn’t there anything in the market to address it? What solution could simplify the process of learning music, but also enhance the creative process and elevate live performances – all while being easily accessible and simple to use? Enter the idea for Caedence – a performance and collaboration software that would allow musicians to practice, rehearse, and perform in perfect sync.

Finding The MIDI Association

Energized about creating a solution that could revolutionize music performance, Jeff, along with partners Terrance Schubring and Anton Friant, swiftly created a working prototype. After successfully sending MIDI commands from Caedence to control virtual & hardware instruments, guitar effect pedals, and stage lighting, the team realized that they truly had something great on their hands. MIDI was the catalyst that transformed Caedence from a useful individual practice tool into a fully conceived live music performance and collaboration solution.

Jeff had previously joined the MIDI Association as an individual, all the while connecting with other members to learn as much as he could. His enthusiasm attracted Executive Board member Athan Billias, who reached out to learn more about what Jeff was working on. After connecting, it was immediately clear that Caedence and the MIDI Association had natural synergy. Caedence soon joined as a Corporate Member, and Athan generously took on an unofficial advisory role for the young startup.

A Transformative Collaboration

Joining the MIDI Association was a game-changer for Caedence – both for the software itself and the Caedence team. With access to the Association’s wealth of knowledge and resources, the Caedence team was able to fix product bugs and create features they hadn’t even considered before.

With the software in a good place, Caedence was ready for a closed beta release. In an effort to sign up beta testers, the team headed to the NAMM Show in 2023 as part of the MIDI Association cohort. Attendees were attracted to the Caedence booth – its strong visuals and interactive nature regularly drawing a crowd to the area. The team walked people through the features of the platform, demonstrating how it could help musicians learn faster, play better, and create more engaging performances.

And then an unexpected thing happened. A high school music teacher from Oregon with a modern band program approached the team and asked about using Caedence in the classroom. What followed was a series of compelling conversations – and the identification of a new market for Caedence.

Open Beta and Beyond

In July of 2024, Caedence reached a huge milestone. The software began its open beta, ready for a broader audience and the feedback that will come with it. Schools across the country are ready to leverage Caedence in the 2024-2025 school year. You can sign up for the open beta on the Caedence website.

For Minneapolis makers at the intersection of tech, art, music, and education

Conferences are costly. Networking is lame. Happy hours are fun, but often less than productive. So Caedence built something different.

Caedence is also hosting its first-ever event, WAVEFRONT, on August 1st in Minneapolis.

WAVEFRONT is a bespoke meeting of innovators, educators, entrepreneurs and artists, hosted in an environment purpose-built to facilitate the exchange of ideas and encourage community – amongst established and emerging talent alike. 

WAVEFRONT is sponsored by several MIDI Association companies.

If you would like to learn more about WAVEFRONT please visit wavefrontmn.com.

3 Best AI Music Generators for MIDI Creation

A new generation of AI MIDI software has emerged over the past 5 years. Google, OpenAI, and Spotify have each published a free MIDI application powered by machine learning and artificial intelligence.

The MIDI Association reported on innovations in this space previously. Google’s AI Duet, their Music Transformer, and Massive Technology’s AR Pianist all rely on MIDI to function properly. We’re beginning to see the emergence of browser and plugin applications linked to cloud services, running frameworks like PyTorch and TensorFlow.

In this article we’ll cover three important AI MIDI tools – Google Magenta Studio, OpenAI’s MuseNet, and Spotify’s Basic Pitch MIDI converter. 

Google Magenta Studio 

Google Magenta is a hub for music and artificial intelligence today. Anyone who uses a DAW and enjoys new plugins should check out the free Magenta Studio suite. It includes five applications. Here’s a quick overview of how they work:

  • Continue – Continue lets users upload a MIDI file and leverage Magenta’s music transformer to extend the music with new material. Keep your temperature setting close to 1.0-1.2 so that the MIDI output sounds similar to the original input, but with variations (see the sketch after this list).
  • Drumify – Drumify creates grooves based on the MIDI file you upload. The team recommends uploading a single instrumental melody at a time to get the best results. For example, upload a bass line and it will try to produce a drum beat that complements it, in MIDI format.
  • Generate – Maybe the closest tool in the collection to a ‘random note generator’, Generate uses a Variational Autoencoder (MusicVAE) that was trained on millions of melodies and rhythms.
  • Groove – This nifty tool takes a MIDI drum track and uses Magenta to modify the rhythm slightly, giving it a more human feel. So if your music was overly quantized or had been performed sloppily, Groove could be a helpful tool.
  • Interpolate – This app asks you for two separate MIDI melody tracks. When you hit generate, Magenta composes a melody that bridges them together.
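To get a feel for why that temperature range matters, here is a minimal Python sketch of temperature-scaled sampling (this is illustrative, not Magenta's actual code; the logits over candidate notes are hypothetical):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    """Sample an index from model logits scaled by temperature.

    temperature < 1.0 concentrates probability on the likeliest notes
    (output stays close to the input's style); temperature > 1.0
    flattens the distribution, producing wilder variations.
    """
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())  # softmax, shifted for stability
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Hypothetical logits over four candidate MIDI pitches
logits = [2.0, 1.0, 0.5, 0.1]
note_index = sample_with_temperature(logits, temperature=1.1)
```

At 1.0-1.2 the distribution is only slightly flattened, which is why the output resembles the input with mild variations.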

The Magenta team is also responsible for Tone Transfer, an application that transforms audio from one instrument to another. It’s not a MIDI tool, but you can use it in your DAW alongside Magenta Studio.

OpenAI MuseNet 

MuseTree – Free Nodal AI Music Generator


OpenAI is a major player in the AI MIDI generator space. Their DALL·E 2 web application took the world by storm this year, creating stunningly realistic artwork and photographs in any style. But what you might not know is that they’ve created two major music applications, MuseNet and Jukebox.

  • MuseNet – MuseNet is comparable to Google’s Continue, taking in MIDI files and generating new ones. But users can constrain the MIDI output to parameters like genre and artist, introducing a new layer of customization to the process.
  • MuseTree – If you’re going to experiment with MuseNet, I recommend using the open-source MuseTree project instead of their demo website. It has a better interface, and you’ll be able to create better AI music workflows at scale.
  • Jukebox – Published roughly a year after MuseNet, Jukebox focuses on generating audio files based on a set of constraints like genre and artist. The output is strange, to say the least. It does kind of work, but in other ways it doesn’t. The application can also be tricky to operate, requiring a Google Colab account and some patience troubleshooting the code when it doesn’t run as expected. 

Spotify Basic Pitch (Audio-to-MIDI)

Spotify’s Basic Pitch: Free Audio-To-MIDI Converter

Spotify is the third major contender in this AI music generator space. They acquired Soundtrap, a mobile-friendly music creation app founded in 2013, so they’re no stranger to music production tools. As for machine learning, there’s already a publicly available Spotify AI toolset that powers their recommendation engine.

Basic Pitch is a free browser tool that lets you upload any song as an audio file and convert it into MIDI. Basic Pitch leverages machine learning to analyze the audio and predict how it should be represented in MIDI. Prepare to do some cleanup, especially if there’s more than one instrument in the audio.
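Basic Pitch is also published as an open-source Python package, so the same conversion can be scripted locally. A minimal sketch, assuming the pip package basic-pitch and its predict helper as documented in Spotify's repo (the file names are placeholders):

```python
# pip install basic-pitch
from basic_pitch.inference import predict

# Returns the raw model output, a PrettyMIDI transcription, and note events
model_output, midi_data, note_events = predict("my_song.wav")

# Save the MIDI for cleanup in your DAW
midi_data.write("my_song_transcribed.mid")
```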

Spotify hasn’t published a MIDI generator like MuseNet or Magenta Studio’s Continue. But in some ways Basic Pitch is even more helpful, because it generates MIDI you can use right away, for a practical purpose. Learn your favorite music quickly!

 The Future of AI MIDI Generators

The consumer applications we’ve mentioned, like Magenta Studio, MuseTree, and Basic Pitch, will give you a sense of their current capabilities and limitations. For example, Magenta Studio and MuseTree work best when they are fed special types of musical input, like arpeggios or pentatonic blues melodies. 

Product demos often focus on the best use cases, but as you push these AI MIDI generators to their limits, the output becomes less coherent. That being said, there’s a clear precedent for future innovation and the race is on, amongst these big tech companies, to compete and innovate in the space.

Private companies, like AIVA and Soundful, are also offering AI music generation for licensing. Their user-friendly interfaces are built for social media content creators who want to license music at a lower cost. Users create an account, choose a genre, generate audio, and then download the original music for their projects.

Large digital content libraries have been acquiring AI music generator startups in recent years. Apple acquired a London company called AI Music in February 2022, while Shutterstock purchased Amper Music in 2020. This suggests a large upcoming shift in how licensed music is created and distributed.

At the periphery of these developments, we’re beginning to see robotics teams that have successfully integrated AI music generation into singing, instrument-playing animatronic robots like Shimon and Kuka. Built by the Center for Music Technology at Georgia Tech, Shimon has performed live with jazz groups and can improvise original solos thanks to the power of artificial intelligence.

Stay tuned for future articles, with updates on this evolving software and robotics ecosystem. 

Ableton is a May Is MIDI Month Platinum Sponsor

We make Live, Push and Link — unique software and hardware for music creation and performance. With these products, our community of users creates amazing things.
Ableton was founded in 1999 and released the first version of Live in 2001. Our products are used by a community of dedicated musicians, sound designers, and artists from across the world.

Making music isn’t easy. It takes time, effort, and learning. But when you’re in the flow, it’s incredibly rewarding. We feel the same way about making Ableton products. The driving force behind Ableton is our passion for what we make, and the people we make it for.


Song Maker Kit

The ROLI Songmaker Kit comprises some of the most innovative and portable music-making devices available. It’s centered around the Seaboard Block, a 24-note controller featuring ROLI’s acclaimed keywave playing surface. It’s joined by the Lightpad Block M touch controller and the Loop Block control module, for comprehensive control over the included Equator and NOISE software. Complete with a protective case, the ROLI Songmaker Kit is a powerful portable music creation system.

The Songmaker Kit also includes Ableton Live Lite, and Ableton is a May Is MIDI Month Platinum Sponsor.


Roli and Ableton Live Lite

This YouTube video shows how to use ROLI Blocks with Ableton Live.


Brothers Marco and Jack Parisi recreate a Michael Jackson classic hit

 

Electronic duo PARISI are true virtuosic players of ROLI instruments, whose performances have amazed and astounded audiences all over the world — and their latest rendition of Michael Jackson’s iconic pop hit “Billie Jean” is no exception.

The New Roland Cloud: ZENOLOGY Software Synthesizer, New Membership Plans, and Lifetime Keys

Earlier this month, Roland unveiled the biggest enhancements yet to the Roland Cloud platform. These include the introduction of the ZENOLOGY software synthesizer, new membership options, and the ability to buy Lifetime Keys to individual instruments.

Since its inception, Roland Cloud has grown into a collection of more than 50 instruments—including Roland legends like the TR-808, JUPITER-8, and JUNO-106—each giving users instantly inspiring, genre-defining sounds from the past, present, and future of music. 


Introducing the New ZENOLOGY Software Synthesizer 

 Roland’s ZEN-Core Synthesis System—found in the Roland FANTOM and JUPITER-X synthesizers, MC Series Grooveboxes, and the RD-88 Digital Piano—is now available for use in DAWs with the new ZENOLOGY Software Synthesizer, available only in Roland Cloud. Users can utilize the same sounds in both DAW and hardware instruments, create custom banks, and share with friends and collaborators. Anyone with an active Roland Account can access ZENOLOGY Lite.



New Membership Plans


Roland Cloud now offers three membership plans: Core, Pro, and Ultimate. Core ($29.99/year or $2.99/month) includes access to the ZENOLOGY Software Synthesizer and ZEN-Core sound packs.

Pro ($99.00/year or $9.99/month) gives unlimited access to the TR-808, D-50, and ZENOLOGY Pro (coming fall 2020). Pro also includes all ZEN-Core Sound Packs, Wave and Model Expansions for software, plus the Anthology, TERA, FLAVR, and Drum Studio collections, and all software patches and patterns.

Ultimate ($199.00/year or $19.99/month) includes all Legendary and SRX collections plus unlimited access to all instruments and sounds. 


Lifetime Keys to Individual Roland Cloud Instruments 

Users can also purchase Lifetime Keys to Roland Cloud instruments like the TB-303, TR-909, JX-3P, and many others. These Lifetime Keys provide unrestricted access to a single Roland Cloud software instrument for as long as a Roland Account remains active. 

Experience Roland Cloud by downloading Roland Cloud Manager 2.5:  

Yamaha and Camelot Pro make playing live easier

LIVE PERFORMANCE IS NOW EASIER AND MORE FUN

CROSS PLATFORM LIVE PERFORMANCE APPLICATION

Wondering how to connect and control your hardware and software instruments in one place? Want to remotely control your Yamaha synthesizers and quickly recall presets on stage? How about attaching a lead sheet or music score with your own notes to a set of sounds?

Camelot Pro and Yamaha have teamed up with special features for Yamaha Synth owners.

REGISTER AND GET CAMELOT PRO FOR MACOS OR WINDOWS

Download your Camelot Pro copy now with a special offer for Yamaha Synth owners: try the full version FREE for three months with an option to purchase for 40% off.

The promo is valid from October 1, 2019 to September 30, 2020.

Upgrade your live performance experience to the next level:

  • Build your live set list with ease
  • Manage your Yamaha instruments using smart maps (no programming skills required!)
  • Combine, layer and split software instruments with your Yamaha synths
  • Get rid of standard connection limits with Camelot Advanced MIDI routing
  • Attach music scores or chords to any scene

The really slick thing about the combination of Yamaha synths and Camelot Pro is that it lets you very easily integrate your hardware synths and VST/AU plug-ins for live performance. The Yamaha synths connect to your computer via USB and integrate digital audio and MIDI. So just connect your computer to your Yamaha synth, and then your Yamaha synth to your sound system. Camelot allows you to combine your hardware and software in complex splits and layers, and everything comes out the analog outputs of your Yamaha synth.



Camelot Pro Key Features 


Camelot Pro Tutorial: The Definitive Guide


Camelot Pro Tutorial: MIDI Connections


Camelot Pro Tutorial: Managing Any MIDI Device


Yamaha Hardware List

Integrate VST/AU software instruments

Add song notation

Advanced MIDI Routing

Compatible with Mac/PC and iPad


Don’t own a Yamaha Synth?  

No problem. Camelot Pro works with lots of synths. You can check the hardware list here:

https://camelotpro.com/hardware-instruments/


Try it for free 

 There is even a free version of Camelot that you can download just for signing up for the Camelot Newsletter. 

FL Studio – MIDI Recording and Editing

Here are three new videos about how to use MIDI in FL Studio




20% Off Online Video Courses

This month get 20% off any TMA curriculum. Choose a monthly or an annual subscription and save even more! Supercharge your music production skills today.



Massive Online Courseware Library : MIDI Association : NonLinear Educating

Nonlinear Educating is an adaptive technology company dedicated to improving the way the world learns. The combination of our powerful, modular video-courseware production and distribution platform and our extensive library of industry-leading training courses has granted us the opportunity to empower a variety of partners from a multitude of industries. The foundationally modular approach to our application infrastructure enables us to rapidly customize instances of our platform to meet the specific needs of our partners. We are agile, adaptive, and committed to developing the most efficient and robust video-learning platform on the internet.

Cubase 10 MIDI Recording and Editing

Here are three new videos about how to use MIDI in Cubase 10 




20% Off Online Video Courses

This month get 20% off any TMA curriculum. Choose a monthly or an annual subscription and save even more! Supercharge your music production skills today.



Cubasis 2.6 adds new MIDI features and ROLI Blocks integration

Cubasis 2.6 Overview

Synonymous with ease of use, Cubasis 2 is a powerful and fully featured iOS-based music production system that pushes the creative envelope. Whether you’re capturing simple ideas or musical masterpieces, Cubasis comes with outstanding, touch-optimized tools for recording, editing, mixing and sharing your music with the world right away. With its second iteration, Cubasis boasts many additions such as real-time time-stretching and pitch-shifting, a studio-grade channel strip, pro-sounding effects, massive instrument refills, a refreshed MIDI Editor and many other great features. Put your hands on three onboard instruments, numerous loops and instrument sounds to creatively lift your music to perfection, together with the included mixer and effects. Once recorded, transfer your music directly to Cubase or share it with the world.

But what’s really interesting is how many new MIDI features Cubasis 2.6 has. 

New features in Cubasis 2.6 

Audio Unit full-screen support*

Tweak sounds and parameters with utmost accuracy, using Cubasis’ super-sized full-screen support for Audio Unit instruments and effects plug-ins. Enjoy maximum productivity, creativity and flexibility, switching between the available screen sizes at lightning speed with only a few taps.

ROLI NOISE Seaboard and Drum Grid visualizer support*

Experience a new approach to making music, using ROLI’s free downloadable NOISE app within Cubasis. Create inspiring drum and melody parts through intuitive gestures, using the unique Seaboard and Drum Grid visualizers, now directly accessible via Cubasis’ Audio Unit full-screen mode.

MIDI CC support for compatible Audio Unit effect plug-ins*

Easily remote control your favorite compatible Audio Unit effect plug-ins via external controllers. No matter if you’re moving effect knobs via MIDI Learn or switching presets via program change — if your Audio Unit effects plug-in supports it, it can be done in Cubasis with great ease. 

*Requires iOS 11 


Check out what you can do with Cubasis 2.6 and ROLI.


Here is an in-depth tutorial on Cubasis and ROLI Blocks.



AudioSwift – Your Trackpad as a MIDI Controller

Control · Improve · Create





Slider

Divide the trackpad into one to four virtual sliders and send CC or Pitch Bend MIDI messages. Add expressiveness to virtual instruments or automate plugin parameters easily with a trackpad. Edit photos faster in Lightroom.





XY

Use your trackpad as an XY pad to control several parameters at the same time, using one-, two-, or three-finger configurations. A great MIDI tool for mobile producers and sound designers.




Mixer

Control one or two faders at the same time using simple touches. Adjust panning, set send levels, use your trackpad as a jog wheel, and write automation quickly and easily. It’s currently supported in Logic Pro, Pro Tools, Ableton Live, Reaper, Cubase and Studio One.





Trigger

Make quick beats using your trackpad as trigger pads. Play audio clips by tapping your fingers. Up to three fingers can be used at the same time. 




Scale

Choose a tonic note and then select a scale. Slide your fingers from left to right to play notes in the selected key. Apply pressure to the trackpad and it will send aftertouch MIDI messages. (Aftertouch requires a trackpad with Force Touch). 


AudioSwift (US$24) requires macOS 10.11 or newer. Get a 50% discount using coupon GOLDENFROG50 for a limited time.

Telemidi – Creating music over The Internet in real-time

What is Telemidi?

A system of connecting two DAW environments over The Internet, to achieve real-time musical ‘jamming’.
The product of Masters research by Matt Bray.


“…a musician’s behaviour at one location will be occurring at the other location in a near synchronous manner, and vice versa, thus allowing for a ‘jam’-like atmosphere to be mutually shared.”

Matt Bray (Telemidi creator)

Telemidi is an approach to Networked Music Performance (NMP) that enables musicians to co-create music in real-time by simultaneously exchanging MIDI data over The Internet. Computer networking brings with it the factor of latency (a delay of data transfer), the prevalent obstacle within NMPs, especially when attempting to match the interaction of traditional performance ensembles. Telemidi accommodates latency via numerous Latency Accepting Solutions (LAS – identified below) embedded within two linked DAW environments, to equip performers with the ability to interact in a dynamic, interactive and ongoing musical process (jamming). This is achieved in part by employing RTP (Real Time Protocol) MIDI data transfer systems to deliver performance and control information over The Internet from one IP address to another in a direct P2P (peer-to-peer) fashion. Once arriving at a given IP address, MIDI data is then routed into the complex DAW environment to control any number of devices, surfaces, commands and performance mechanisms. Essentially, a musician’s behaviour at one location will be occurring at the other location in a near synchronous manner, and vice versa, thus allowing for a ‘jam’-like atmosphere to be mutually shared. As seen in the video listed below, this infrastructure can be applied to generate all manner of musical actions and genres, whereby participants readily build and exchange musical ideas to support improvising and composing (‘Comprovising’). Telemidi is a true Telematic performance system.


What is Telematic Performance?

Telematic music performance is a branch of Network Music Performance (NMP) and is a rapidly evolving, exciting field that brings multiple musicians and technologies into the same virtual space. Telematic Performance is the transfer of data and performance information over significant distances, achieved by the explicit use of technology. The more effective the transfer, the greater the sense of Telepresence, the ability of a performer to “be” in the space of another performer. Telematic performances first appeared when Wide Area Networking (WAN) options presented themselves for networked music ensembles via technologies such as ISDN telephony, and options increased alongside the explosion of computer processing and networking developments that gave rise to The Internet. Unfortunately, in this global WAN environment, latency has stubbornly remained a constant and seemingly unavoidable obstruction to real-time ensemble performance.

Telematic performance has been thoroughly explored by countless academic, commercial and hobby entities over the last four decades, with limited successes. The musical performances have taken many forms throughout the exponential development of computing technologies, yet have been more-or-less restricted by latency at every turn. For example, there is the inherent latency of a CPU within any given DAW, the additional processing loads of soft/hardware devices, the size and number of data packages generated in a performance, and the delivery of this data over The Internet, which in turn presents issues regarding available bandwidth, data queuing, WiFi strength, etc. This is but one side of the engagement, as we also have the DAW requirements of the reciprocating location, and of course the need for synchronous interplay between the two. Real-time NMPs suffer at the whim of network jitter, data delays and DAW operations.


How Telemidi Works

Telemidi works by exchanging MIDI data in a duplex fashion between the IP addresses of two performers, each of whom is running a near-identical soft/hardware DAW environment. A dovetailed MIDI channel allocation caters for their respective actions while avoiding feedback loops, in a system with the potential to deliver performance information to and from each location in near real-time (10-30 ms).
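The research doesn't publish source code, but the dovetailed allocation idea can be sketched in Python with the mido library. The even 8/8 channel split and port names below are hypothetical placeholders (Telemidi actually used 13 of the 16 channels):

```python
import mido

LOCAL_CHANNELS = set(range(0, 8))    # channels this node transmits on
REMOTE_CHANNELS = set(range(8, 16))  # channels owned by the other node

inport = mido.open_input("Network Session 1")   # RTP-MIDI port; name varies
outport = mido.open_output("IAC Driver Bus 1")  # virtual port into the DAW

for msg in inport:
    # Forward only messages owned by the remote performer into our DAW;
    # anything on a local channel is an echo of our own playing and
    # would create a feedback loop.
    if hasattr(msg, "channel") and msg.channel in REMOTE_CHANNELS:
        outport.send(msg)
```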

To achieve this musical performance over The Internet, the Telemidi process employed:

1 – Hardware – a combination of control devices

2 – Software – two near-identical Ableton Live sets

3 – Latency Accepting Solutions (LAS) – ten examples

4 – RTP MIDI – facilitating the delivery of MIDI data to a WAN.

Click on the tabs below for a summary of items used at each node location during the Telemidi research (for more information and to download the Masters thesis go to www.telemidi.org):

 Below is a list of hardware used at each location in the Telemidi research:

Laptop Computers

+ Mac and Windows computers used, demonstrating Telemidi’s accessibility.


Novation SL Mk II MIDI controller keyboard


+ High capacity for customised MIDI routing (both control and performance data)

+ Traditional musical interface (keyboard)


Novation LaunchPad Pro


+ Native integration with Ableton Live

+ Contemporary `Grid-based’ composition process 

Software

Ableton Live 

Near-identical Live sets (duplex architecture)
7 pre-composed songs (each split into four sections, A, B, C & D)
54 additional percussion loop patterns
12 x Synth Instruments (native and 3rd-party)
Synths: 4 each of Bass/Harmony/Lead
16 DSP effects processors (with 2 or more mapped parameters)
286 interleaved MIDI mappings within each Live set
13 of 16 MIDI Channels used for shared performance and control data
Tempo variation control
Volume & start/stop control for each voice (Bass, Harmony & Melody)
Record and Loop capacity for each voice (Bass, Harmony & Melody)

LATENCY ACCEPTING SOLUTIONS (LAS):

The following processes adapt to and cumulatively overcome the obstacle of latency. They are ranked in order of efficiency from 1 (most efficient) to 10 (least efficient).

1 – One Bar Quantisation: All pre-composed, percussive and recorded loops are set to trigger upon a one-bar quantization routine, allowing time (2,000 ms @ 120 bpm) to accommodate network latency between song structure changes (most commonly occurring on a 4 to 8 bar basis).

2 – P2P (Peer-to-Peer) Network Connection: Direct delivery of MIDI data from one IP address to the other, with no third-party ‘browser-based’ servers used to calibrate message timing.

3 – Master/Slave Relationship: One node (Alpha) was allocated the role of master and the other (Beta) the role of slave, allowing for a consistent, shared tempo and self-correcting tempo alignment following any network interference.

4 – Pulse-Based Music (EDM) as the Chosen Genre: A genre without reliance on a strict scored format, relying instead on a simple and repetitive pulse.

5 – Floating Progression (manner of Comprovising ideas): Each performer initiates an idea or motif, the other responds accordingly and vice versa (jamming); any artefacts of latency only play into this process.

6 – 16th-Note Record Quantize: An inbuilt Ableton function ensuring any recorded notes are quantized to the grid.

7 – MIDI Quantize: A 3rd-party Max4Live device (16th note) that puts incoming WAN MIDI onto the grid of the receiving DAW.

8 – Manual Incremental Tempo Decrease: In the event of critical latency interference, tempo can be reduced incrementally, extending the time between each new bar and granting time for the clearance of latency issues.

9 – Kick Drum (bar-length loops): During a period of critical latency interference, a single-bar loop of quarter-note kick drum events is triggered to maintain the genre.

10 – Stop Buttons: During any period of critical latency interference, each voice (beats, percussion, bass, harmony or melody) can be stopped individually to reduce the musical texture, or to stop harmonic dissonance and stuck notes.

RTP MIDI

+ macOS – AppleMIDI, accessed through ‘Audio MIDI Setup’

+ Windows – rtpMIDI software used (created by Tobias Erichsen)

Success of Performance

Two performances were undertaken in the Telemidi research, the first with each performer 7.5km (4.6 mi) apart, and the second 2,730km (1,696 mi) apart.  Both were recorded and then analysed in detail (see video below), whereby aspects of performance parameters and methods were identified alongside several fundamental principles of Telematic performance.  A stream of audio is generated from each node and each has been analysed in the video to identify the interplay between the two musicians, highlighting any variations in the music created and to recognize artefacts of network performance.  It was noted that the music generated at each node was strikingly similar, although subtle variations in the rhythmic phrasing of bass, harmony and melody were common.

The Telemidi system ably accommodates all but the most obtrusive latency yet provides each musician with the capacity to co-create and Comprovise music in real-time across significant geographic distances.  These performances showed constant interplay and the exchange of musical ideas, as can be seen in the 16 minute analysis video below, leaving the door open for many exciting possibilities in the future.


16min Video Analysis


Future Plans

The principles of Telemidi were the focus of Matt Bray’s 2017 Masters research. Now that the Telemidi process has been proven to function, the landscape is open for musicians to create and interact with each other in real-time scenarios regardless of their geographic locations.

The next steps are to:

+ Recruit keen MIDI-philes from around the globe to share and exchange knowledge in regards to the potentials of the Telemidi process (if this is you, please visit www.telemidi.org and leave a message)

+ Identify the most stable, low latency connections to The Internet available, to begin test performances across greater geographic regions

+ Refine and curate the infrastructure to suit various genres (from EDM to contemporary, also including live vocalists/musicians at each location)

+ Produce and promote simultaneous live performance events in capital cities, first nationally (Australia) and then internationally.

If you are at all interested in contributing to, or participating in, the Telemidi process, please contact me, Matt Bray, at www.telemidi.org; I’d love to hear from you and see what possibilities are achievable.

Thanks for checking out Telemidi!!

Matt Bray


MIDI and the Surface Pen

Pens and styluses have been employed as computer interaction devices for quite some time now. Most commonly they were used with peripheral graphics tablets, giving the artist or designer a more natural flow than a mouse could muster. With the release of the Surface Pro hybrid laptop in 2012, Microsoft brought along a digital pen that could work directly on the screen. It was intended to bridge the gap between the demands of desktop software and the tablet touch-screen form factor: in a mouse- and trackpad-free computing environment, how better to access the finer details that your thick fingertips couldn’t manage?

The advantages for the artist become quickly apparent. As the Surface Pro has evolved the graphical power has gotten to the point where it’s a completely competent sketching, drawing and design platform. But there’s another group of artists for whom the digital pen has an awful lot of potential, and that’s the musician. 

This is probably most joyously demonstrated by the Windows 10 app StaffPad. StaffPad takes the idea of writing music completely literally: it presents you with a blank sheet of manuscript paper and asks you to start writing. Combining the digital pen with handwriting recognition, StaffPad is able to interpret your handwritten notes as digital MIDI information directly on a score, which can then be played back through a virtual orchestra. It’s a stunning piece of work and remarkably fluid and creative to use.

Most of us approach music creation in a more sequenced format. The pen has a lot to offer here as well. Entering notes into a piano roll immediately comes to mind, as does the editing of notes, the trimming of clips or moving blocks in an arrangement. Consider drawing in track automation, with a pen rather than a mouse. How much more fluid and natural could that be?

In many ways the pen feels like it’s simply replacing the actions of a mouse – but it doesn’t quite work like that. The Surface Pen works through a combination of technology in the pen and a layer of corresponding technology on the screen. It’s not just touch-screen technology: you can’t take the Surface Pen and use it on another brand of screen; it will only work on Surface products. While that affords the technology a great deal of power, it can also trip up software that isn’t able to interpret the technology properly. In many cases the pen works just like a mouse replacement, but in others it can cause weird behaviour, or none at all.

When PreSonus first released their new touch-enabled version 3 of Studio One the reaction to the Surface Pen when running on the Surface Pro 3 was to get quickly confused and then lock up. In Cakewalk Sonar, again touch-enabled, there were areas in the software that completely refused to acknowledge the presence of a pen on the screen. Both of those DAWs have far better support for it now. Ableton Live appeared to work with both touch and the pen without any trouble except that when grabbing a fader or knob control the value would leap between the maximum and minimum making it impossible to set it accurately. Adding support for “AbsoluteMouseMode” in a preferences file cured that particular oddity. 

Where it’s been most unflinchingly successful is within Steinberg’s Cubase and Avid’s Pro Tools neither of which has expressed any interest in touch or pen interaction – but it simply works anyway. From entering and editing notes to drawing in long wiggly lines of modulation and automation the pen becomes a very expressive tool.

However, the full immersion that the pen can offer tends to mean eschewing the keyboard. When you’re leaned in, having to pull back to use a keyboard shortcut can be rather jarring and interrupt your workflow. There’s a certain amount you can do with the on-screen virtual keyboard, but it can completely cover what it is you’re trying to edit, so it’s not ideal. This highlights what I see as the current flaw in the Surface Pen workflow – the lack of a relevant, customisable toolbar.

When editing notes or an arrangement with the pen, simple tasks such as copy and paste become cumbersome. You can evoke a right-click with the squeeze of a button and then select these tasks from the list, or you can glide through the menu system, but neither of these options is as elegant as a simple Ctrl-C and Ctrl-V. You can quickly extend that to other actions – opening the editor or the mixer, duplicating, setting loop points – there’s a whole raft of commands hidden away behind menus or keyboard shortcuts that are annoying to reach with just the pen for input. Adding a simple macro toolbar with user-definable keyboard shortcuts would greatly enhance the pen’s workflow. It’s possible to do this with third-party applications, but it really needs support at the OS level.

This is something Dell have considered with their Canvas touch-screen and digital pen system. They have incorporated floating “palettes” – little toolbars that provide access to useful keyboard shortcuts. Some DAWs, such as Bitwig Studio and PreSonus Studio One, have finger-friendly toolbars that can perform a similar function – but something more global would be helpful.

With the release of the Surface Pro (2017), Microsoft have introduced an improved Surface Pen with four times the resolution of the previous version. Although more relevant to the artist who draws, it’s interesting to see pen support improving in many DAWs. Its usefulness is becoming more apparent, and if you consider the Dell Canvas and the iPad Pro Pencil, along with the development of the Surface into the larger Surface Studio and laptop form factors, it’s also becoming more widespread.

At the time of writing only one DAW manufacturer has stepped up to push the digital pen into more than just emulating mouse tasks. Bitwig Studio has some special MPE (MIDI Polyphonic Expression) functionality that allows you to map the pen pressure to parameters on MPE-compatible virtual instruments. More on that in another article, but hopefully more creative uses will emerge as this gains popularity.

The digital pen offers many creative opportunities. It frees you from the mouse/keyboard paradigm and pushes you into a more natural and fluid way of working. It lacks support in some software and there’s some work to be done on optimising the workflow by combining it with a toolbar, but it offers a different and creative approach to musical computer interaction.

Here’s a video of me reviewing the Microsoft Surface Book for music production which has a lot of pen use and examples in it. There’s plenty more on the YouTube channel:

5 MIDI Quantization Tips

Make quantization work for you, not against you 

Quantization is the process of moving MIDI data (usually notes, but also potentially other data) that’s out of time to a rhythmic “grid.” For example, if a kick drum is slightly behind the beat, quantization can move it right on the beat. Quantization was controversial enough when it was limited to MIDI, but now that you can quantize audio, it’s even more of an issue. Although some genres of music—like electro and other EDM variants—work well with quantization, excessive quantization can compromise a piece of music’s human feel. 
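At its simplest, hard quantization just rounds each event's time to the nearest grid line. A minimal Python sketch, assuming times measured in MIDI ticks at 480 ticks per quarter note:

```python
def quantize(time_ticks, grid_ticks):
    """Snap a MIDI event time to the nearest grid line."""
    return round(time_ticks / grid_ticks) * grid_ticks

# At 480 ticks per quarter note, a 16th-note grid is 120 ticks.
# A kick played 30 ticks late (510) snaps back onto the beat (480).
print(quantize(510, 120))  # -> 480
```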

Some people take a “holier than thou” approach to quantization by saying it’s for musical morons who lack the chops to get something right in the first place. These people, of course, never use quantization…well, at least while no one’s looking. But quantization has its place; it’s the ticket to ultra-tight grooves, and a way to let you keep a first and inspired take, instead of having to play a part over and over again to get it right—and lose the human feel by beating a part to death. Like any tool, quantization can be used or misused, so let’s concentrate on how to make quantization work for you—and avoid giving an overly rigid, non-musical quality to your work. 

TRUST YOUR FEELINGS, LUKE 

Computers are terrible music critics. Forcing music to fit the rhythmic criteria established by a machine is silly—it’s real people, with real emotions, who make and listen to music. To a computer, having every note hit exactly on the beat may be desirable, but that’s not the way humans work. 

There’s a fine line between “making a mistake” and “bending the rhythm to your will.” Quantization removes that fine line. Yes, it gets rid of the mistakes, but it also gets rid of the nuances. 

When sequencers first appeared, musicians would often compare the quantized and non-quantized versions of their playing. Invariably, after hearing the quantized version, the reaction would be a crestfallen “gee, I didn’t realize my timing was that bad.” But in many cases, the human was right, not the machine. I’ve played some solo lines where notes were off by as much as 50 milliseconds from the beat, yet they sounded right. Tip #1: You dance; a computer doesn’t. You are therefore much more qualified than a computer to determine what rhythm sounds right.

WHY QUANTIZATION SHOULD BE THE LAST THING YOU DO 

Some people quantize a track as soon as they’ve finished playing it. Don’t! In analyzing unquantized music, you’ll often find that every instrument of every track will tend to rush or lag the beat together. In other words, suppose you either consciously or unconsciously rush the tempo by playing the snare a bit ahead of the beat. As you record subsequent overdubs, these will be referenced to the offset snare, creating a unified feeling of rushing the tempo. If you quantize the snare part immediately after playing, then you will play to the quantized part, which will change the feel. 

Another possible trap occurs if you play several unquantized parts and find that some sound “off.” The expected solution would be to quantize the parts to the beat, yet the “wrong” parts may not be off compared to the absolute beat, but to a part that was purposely rushed or lagged. In the example given above of a slightly rushed snare part, you’d want to quantize your parts in relation to the snare, not a fixed beat. If you quantize to the beat the rhythm will sound even more off, because some parts will be off with respect to absolute timing, while other parts will be off with respect to the relative timing of the snare hit. At this point, most musicians mistakenly quantize everything to the beat, destroying the feel of the piece. Tip #2: Don’t quantize until lots of parts are down and the relative—not absolute—rhythm of the piece has been established. 

SELECTIVE QUANTIZATION 

Often only a few parts of a track will need quantization, yet for convenience musicians tend to quantize an entire track, reasoning that it will fix the parts that sound wrong and not affect the parts that sound right. However, the parts that sound right may be consistent to a relative rhythm, not an absolute one. 

The best approach is to go through a piece, a few measures at a time, and quantize only those parts that are clearly in need of quantization—in other words, they sound wrong. Very often, what’s needed is not quantization per se but merely shifting an offending note’s start time. Look at the other tracks and see if notes in that particular part of the tune tend to lead or lag the beat, and shift the start time accordingly. Tip #3: If it ain’t broke, don’t fix it. Quantize only the notes that are off enough to sound wrong. 

BELLS AND WHISTLES

Modern-day quantization tools, whether for MIDI or audio, offer many options that make quantization more effective. One of the most useful is quantization strength, which moves a note closer to the absolute beat by a particular percentage. For example, if a note falls 10 milliseconds ahead of the beat, quantizing to 50% strength would place it 5 milliseconds ahead of the beat. This smooths out gross timing errors while retaining some of the original part’s feel (Fig. 1).

Fig. 1: The upper window (from Cakewalk SONAR) shows standard Quantization options; note that Strength is set to 80%, and there’s a bit of Swing. The lower window handles Groove Quantization, which can apply different feels by choosing a “groove” from a menu.
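Strength-based quantization closes only part of the gap to the grid. Here's a minimal sketch of the idea in Python (illustrative, not any particular DAW's implementation):

```python
def quantize_with_strength(time_ticks, grid_ticks, strength=0.5):
    """Move an event toward the nearest grid line by `strength` (0.0-1.0).

    strength=1.0 is hard quantization; strength=0.5 closes half the gap,
    so a note 10 ticks ahead of the beat ends up 5 ticks ahead.
    """
    target = round(time_ticks / grid_ticks) * grid_ticks
    return time_ticks + (target - time_ticks) * strength

print(quantize_with_strength(510, 120, strength=0.5))  # -> 495.0
```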

Some programs offer “groove templates” (where you can set up a relative rhythm to which parts are quantized), or the option to quantize notes in one track to the notes in another track (which is great for locking bass and drum parts together). Tip #4: Study your recording software’s manual and learn how to use the more esoteric quantization options.

EXPERIMENTS IN QUANTIZATION STRENGTH

Here’s an experiment I like to conduct during sequencing seminars to get the point across about quantization strength.

First, record an unquantized and somewhat sloppy drum part on one track. It should be obvious that the timing is off.

Then copy it to another track, quantize it, and play just that track back; it should be obvious that the timing has been corrected. Then copy the original track again but quantize it to a certain strength—say, 50%. It will probably still sound unquantized. Now try increasing the strength percentage; at some point (typically in the 70% to 90% range), you’ll perceive it as quantized because it sounds right. Finally, play back that track along with the one quantized to 100% strength and check out the timing differences, as evidenced by lots of slapback echoes. If you now play the 100% strength track by itself, it will sound dull and artificial compared to the one quantized at a lesser strength. Tip #5: Correct rhythm is in the ear of the beholder, and a totally quantized track never seems to win out over a track quantized to a percentage of total quantization.

Yes, quantization is a useful tool. But don’t use it indiscriminately, or your music may end up sounding mechanical—which is not a good thing unless, of course, you want it to sound mechanical!

How to Find MIDI Sequencer “Gotchas”

Fix those little “gotchas” before they make it into the final mix

by Craig Anderton

MIDI sequencing is wonderful, but it’s not perfect—and sometimes, you’ll be sandbagged by problems like false triggers (e.g., what happens when you brush against a key accidentally), having two different notes land on the same beat when quantized, voice-stealing that cuts off notes abruptly, and the like. These glitches may not be obvious when other instruments are playing, but they nonetheless can muddy up a piece or even mess up the rhythm. Just as you’d “proof” your writing, it’s a good idea to “proof” sequenced tracks.

Begin by listening to each track in isolation; this reveals flaws more readily than listening to several tracks simultaneously. Headphones can also help, as they may reveal details you’d miss over speakers. As you listen, also check for voice-stealing problems caused by multi-timbral soft synths running out of voices. Sometimes if notes are cut off, merely changing note durations to prevent overlap—or deleting one note from a chord—will solve the problem. But you may also need to dig deeper into some other issues, such as . . .

NOTES WITH ABNORMALLY LOW VELOCITIES OR DURATIONS

Even if you can’t hear these notes, they still use up voices. They’re easy to find in an event list editor, but if you’re in a hurry, do a global “remove every note with a velocity of less than X” (or for duration, “with a note length less than X ticks”) using a function like Cakewalk Sonar’s DeGlitch option (Fig. 1).

Fig. 1: Sonar’s DeGlitch function is deleting all notes with velocities under 10 and durations under 10 milliseconds.

Note that most MIDI guitar parts benefit greatly from a quick cleanup of notes with low velocities or durations.
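If your software lacks a DeGlitch-style command, the same cleanup is easy to script. A minimal sketch using the third-party pretty_midi Python package, mimicking the Fig. 1 thresholds (velocity under 10, duration under 10 ms); the file names are placeholders:

```python
import pretty_midi

pm = pretty_midi.PrettyMIDI("guitar_take.mid")
for inst in pm.instruments:
    # Keep only notes loud enough and long enough to matter
    inst.notes = [
        n for n in inst.notes
        if n.velocity >= 10 and (n.end - n.start) >= 0.010  # 10 ms
    ]
pm.write("guitar_take_clean.mid")
```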

UNWANTED AFTERTOUCH (CHANNEL PRESSURE) DATA

If your master controller generates aftertouch (pressure) but a patch isn’t programmed to use it, you’ll be recording lots of data that serves no useful purpose. When driving hardware synths, this can create timing issues and there may even be negative effects with soft synths if you switch from a sound that doesn’t recognize aftertouch to one that does.

Note that there are two types of aftertouch—channel aftertouch, which generates one message that correlates to all notes being pressed, and polyphonic aftertouch, which generates individual messages for each note being pressed. The latter sends a lot of data down the MIDI stream, but as there are few keyboard controllers with polyphonic aftertouch, it’s unlikely you’ll encounter this problem.

Steinberg Cubase’s Logical Editor (Fig. 2) is designed for removing specific types of data, and one useful application is removing unneeded aftertouch data.

Fig. 2: In this basic application of Cubase’s Logical Editor, all aftertouch data is being removed.

Note that many recording programs disable aftertouch recording as the default, but if you enable it at some point, it may stay enabled until you disable it again.
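Stripping pressure data can also be scripted outside the DAW. A sketch using the mido Python library; because MIDI file events store delta times, the time of each removed message has to be carried over to the next event that's kept:

```python
import mido

mid = mido.MidiFile("take.mid")
for track in mid.tracks:
    kept, carry = [], 0
    for msg in track:
        # 'aftertouch' is channel pressure; 'polytouch' is per-note pressure
        if msg.type in ("aftertouch", "polytouch"):
            carry += msg.time  # preserve the timing of later events
        else:
            kept.append(msg.copy(time=msg.time + carry))
            carry = 0
    track[:] = kept
mid.save("take_no_pressure.mid")
```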

OVERLY WIDE DYNAMIC VARIATIONS

This can be a particular problem with drum parts played from a keyboard—for example, some all-important kick drum hits may be much lower than others. There are two fixes: Edit individual notes (accurate, but time-consuming), or use a MIDI edit command that sets a minimum or maximum velocity level, like the one from Sony Acid Pro (Fig. 3). With pop music drum parts, I often limit the minimum velocity to around 60 or 70.

Fig. 3: Sony’s Acid Pro makes it easy to restrict MIDI dynamics to a particular range of velocity values.
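The same velocity restriction can be scripted. A sketch with pretty_midi, clamping to the minimum of around 60 suggested above for pop drum parts (file names are placeholders):

```python
import pretty_midi

pm = pretty_midi.PrettyMIDI("drums.mid")
for inst in pm.instruments:
    for note in inst.notes:
        # Restrict dynamics to the 60-127 range
        note.velocity = max(60, min(127, note.velocity))
pm.write("drums_leveled.mid")
```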

DOUBLED NOTES

If you “bounce” a key (or drum pad, for that matter) when playing a note, two triggers for the same note can end up close to each other. This is also very common with MIDI guitar. Quantization forces these notes to hit on the same beat, using up an extra voice and producing a flanged/delayed sound. Listening to a track in isolation usually reveals these flanged notes; erase one (if two notes hit on the same beat, I generally erase the one with the lower velocity value). Some programs offer an edit function that deletes duplicates automatically, such as Avid Pro Tools’ Delete Duplicate Notes function (Fig. 4).

Fig. 4: Pro Tools has a menu item dedicated specifically to eliminating duplicate MIDI notes.
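If your program lacks such a command, duplicates can be removed with a short script. A sketch with pretty_midi that keeps the higher-velocity copy whenever two notes of the same pitch share a start time, per the rule of thumb above:

```python
import pretty_midi

pm = pretty_midi.PrettyMIDI("quantized.mid")
for inst in pm.instruments:
    best = {}
    for n in inst.notes:
        key = (round(n.start, 3), n.pitch)  # same start (within 1 ms) and pitch
        if key not in best or n.velocity > best[key].velocity:
            best[key] = n  # keep the louder of the duplicates
    inst.notes = sorted(best.values(), key=lambda n: n.start)
pm.write("deduped.mid")
```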


NOTES OVERLAP WITH SINGLE-NOTE LINES

This applies mostly to bass and wind instruments. In theory, with single-note lines you want one note to end before another begins. Even slight overlaps make the part sound more mushy (bass in particular loses “crispness”) but what’s worse, two voices will briefly play where only one is needed, causing voice-stealing problems. Some programs let you fix overlaps as a Note Duration editing option.

Note, however, that with legato mode you do want notes to overlap. With this mode, a note transitions smoothly into the next note, without re-triggering an envelope when the next note occurs. Thus in a series of legato notes, the envelope attack occurs only for the first note of the series. If the notes overlap without legato mode selected, then you’ll hear separate articulations for each note. With an instrument like bass, legato mode can simulate sliding from one fret to another to change pitch without re-picking the note.
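Where legato overlaps are not intended, overlap trimming is scriptable too. A sketch with pretty_midi that ends each note of a monophonic line where the next one begins (file names are placeholders):

```python
import pretty_midi

pm = pretty_midi.PrettyMIDI("bassline.mid")
for inst in pm.instruments:
    notes = sorted(inst.notes, key=lambda n: n.start)
    for a, b in zip(notes, notes[1:]):
        if a.end > b.start:
            a.end = b.start  # end the earlier note where the next one starts
pm.write("bassline_tight.mid")
```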

Craig Anderton is an Executive Vice-President at Gibson Brands, and Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages. This article is reprinted with the express written permission of HarmonyCentral.