MIDI Orchestration

Whether modern composers are aware of it or not, an unspoken clash exists between music technology and the fundamental teachings of composition. Modern technology is not actually the cause of the disparity, but it does reinforce and amplify a long-standing issue. Rimsky-Korsakov stated the issue eloquently over a century ago (though it probably existed before him) when he noticed that novice composers used dynamics to artificially enhance an instrument's voice within a composition. A common occurrence he noticed: a novice composer would mark a passage forte for a register of an instrument that is less sonorous than its other registers, just to get the line to stand out. Rather than rewriting the passage so that the line naturally stands out, or assigning the line to another instrument that could properly carry it, the composer was essentially telling the musician to just play louder. Obviously this made for a poorly written, and probably poorly heard, piece of music.

Unfortunately, the clash continues today, as virtual instrument manufacturers and many composers tend to mix their samples and compositions so that they're explosive, without any degree of subtlety. I believe this is one of the reasons the composing fundamentalists disavow computer-sequenced compositions altogether, and to a good degree I agree with them. However, I think there can be a happy overlap between the two camps, and I think focusing on and talking about MIDI orchestration is the icebreaker.

MIDI (Musical Instrument Digital Interface, as most of you probably know) is a protocol written to represent, access, and execute commands sent from one digital instrument to another. It is not unlike the keyboard you use to type characters into your computer. The keyboard is just telling your computer that when you press a certain button, it should look up that character within the computer's program and place it into whatever application you're using. This is exactly what's happening with MIDI.

No actual audio information is ever exchanged through MIDI. Whenever you press a note on a MIDI keyboard, it accesses a receiving program (like a D.A.W.) to trigger the stored sound (either analog or digital) that correlates to the note you played. This is amazing because it essentially allows you to do anything you want to the virtual instrument you're accessing. This is also awful, because it essentially allows you to do anything you want to the virtual instrument you're accessing.
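To make the "no audio, just commands" point concrete, here is a minimal sketch of what actually travels down a MIDI cable: a note-on event is just three bytes (a status byte, a note number, and a velocity). The helper function names are my own; only the byte layout comes from the MIDI specification.

```python
# What a MIDI "note-on" actually is: three bytes of instructions, no audio.
# Status byte 0x90-0x9F encodes "note on" plus the channel; the receiver
# (your D.A.W.) looks up and triggers the stored sound itself.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a raw MIDI note-on message (status byte 0x90 | channel)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    """Note-off message (status byte 0x80 | channel, velocity 0x40 by convention)."""
    return bytes([0x80 | channel, note, 0x40])

# Middle C (note 60) on channel 1, played moderately loud:
msg = note_on(0, 60, 96)
print(msg.hex())  # -> "903c60"
```

Three bytes in, a violin (or a dog bark, or anything else the receiver has loaded) out: that is the whole amazing/awful bargain described above.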

It may be apparent, but orchestrating for live instruments is really about writing to an instrument's strengths and understanding its limitations.* (See footnote.) MIDI allows you to throw all of that out the window, which is why many sequenced compositions, even ones using high-end samples to compose acoustic music, sound fake or extremely synthetic.

In reality, the emulation/sampling of any live instrument (whether it's a guitar or a violin) as a virtual instrument is not 100% convincing. It might be in the future, but as of now I have only heard a handful of sequenced compositions comprised entirely of virtual instruments that sound amazing.

Obviously, most musicians I know lack the budget, experience, and resources to record mostly live musicians. However, emulating live instruments with virtual instruments convincingly takes a lot more work than many composers put in. This means that if you want to be taken seriously, and make money composing music with sequenced instruments, you have to learn how to orchestrate MIDI convincingly.

I am going to focus on some of the major missteps I hear in sequenced music that I think can be easily fixed by anyone with entry-level music knowledge. (Although the more you know about fundamental orchestration, the better your productions will be.)

Within any D.A.W. there is what I call a "MIDI Matrix." The MIDI Matrix goes by a different name in every piece of recording software, but they all do essentially the same thing. It is a grid representation of sheet music. It tells you where a note lies within a measure, its duration, its pitch, and how loud/accented it is. These are just the main features of the matrix; it also allows you to manipulate a virtual sound or MIDI-triggered instrument in a mind-numbing number of ways. (We will not be going too deep into MIDI's functionality.)

I'd like to start with a frequently occurring issue that I call the machine-gun effect. Often I'll hear phrases within a sequenced piece with fast repeated notes that sound more like a machine gun unloading than an instrument playing. This is usually caused by a composer copying and pasting the same note at the same velocity over and over again, or by an inadequate virtual sample that has no velocity variation.

I haven't tried every single D.A.W. that exists, but generally, how loud and how accented a MIDI note will play is represented in the Matrix by color: red being the absolute maximum in volume and accent, and purple/blue being quiet and subtle.

When there is no color variation from MIDI note to MIDI note, you are telling the D.A.W. to play the notes robotically. As any musician is aware, no live player will ever reproduce a note, or a performance of a piece, exactly the same way twice. You need to account for this variation in your sequenced composition.

Here's what a MIDI Matrix may look like when a note has been repeated over and over again at the same velocity:

[Image: MIDI Orchestration Pic 1.png]

The easiest way to create performance variation is to play and record the passage into the sequencer with a MIDI keyboard. (For anyone out there who does not possess the piano skills for MIDI recording, the company Sonuus makes an audio-to-MIDI converter. It will attach to any instrument and convert your audio into MIDI notes.) Also, if your composition is set at a challenging BPM, just record the passage at a slower tempo and then bring it back up to speed.

Here is the same passage recorded into the D.A.W. instead of a note plotted or pasted repeatedly. The velocities are varied and will produce a "more" human sound:

[Image: MIDI Orchestration Pic 2.png]

If you're still getting a machine-gun sound, then you may be dealing with a poorly recorded and encoded sample. If money is an issue (and it usually is), one way to combat this is to create two virtual instrument tracks using two different virtual instruments that have the same articulation. For example, many string libraries have both Violin I and Violin II samples recorded. Let's say you've written a spiccato passage. You would load Violin I spiccato into one track and Violin II spiccato into the other. Then you would copy the recorded MIDI and paste it onto both the Violin I and the Violin II tracks.

Your last step would be to delete the off-beats on Violin I and the down-beats on Violin II. You are essentially trading back and forth between the two tracks to give a more realistically performed sound. Of course, you'll need to make sure the volumes are balanced, and then "bus" both virtual instruments to one track so that you can adjust their volumes simultaneously. This trick is not confined to orchestral instruments. It works extremely well on electric bass and guitars, and best of all with woodwinds. Experiment with different articulations from different instruments.
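The delete-the-off-beats step above is really just an alternating split of one note list into two. A small sketch, with notes represented as (start_beat, pitch, velocity) tuples and the track names purely illustrative:

```python
# The two-track trick: duplicate the part, then keep alternating notes
# on each track so the same sample never fires twice in a row.

def split_alternating(notes):
    """Return (violin1, violin2): even-indexed hits stay on track 1
    (the down-beats of a steady run), odd-indexed hits go to track 2."""
    violin1 = notes[0::2]   # 1st, 3rd, 5th... notes
    violin2 = notes[1::2]   # 2nd, 4th, 6th... notes
    return violin1, violin2

# Eight repeated spiccato G4s, one every half beat:
passage = [(i * 0.5, 67, 100) for i in range(8)]
v1, v2 = split_alternating(passage)
print(len(v1), len(v2))  # -> 4 4
```

In practice you would do this by hand in the MIDI Matrix, but the logic is the same: both tracks together play every note, and neither sample ever repeats back-to-back.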

The next major issue I hear in sequenced compositions is no articulation variation from a single instrument: a track sequenced using only (let's say) a legato patch when sections clearly need staccato, or a crescendo or sforzando created by automating the volume of the instrument. (There are many more not listed here.) Sequenced compositions are already at a disadvantage by being synthesized, and if you don't put in the detailed articulations, even an untrained ear will perceive your music as inexperienced, poor, or lazy.

This is easily fixed, albeit labor-intensively, by loading each articulation the piece needs onto its own track. Many sample libraries also offer a function called key-switching to address the need for multiple articulations in a sequenced composition.

Essentially, a dedicated section of the virtual instrument, outside the instrument's register/range, can be triggered to switch between the most common articulations an instrument has. It is done by placing a MIDI note corresponding to each articulation you want, at the moment in the piece where you'd like the articulation to occur. When you play the piece back, it will switch automatically between types. If you are using a trumpet and would like to go from a "Sustained" sound to a "Staccato" sound, you would place two MIDI notes in the section of the MIDI Matrix that reflects those articulations in the virtual instrument library you are using.
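As a sketch of the mechanics, the snippet below drops a short trigger note just before each articulation change. The articulation-to-note mapping is hypothetical; every real library documents its own trigger notes, usually well below the instrument's playable range.

```python
# Hedged sketch of key-switching. KEYSWITCH below is made up for
# illustration; check your library's manual for its actual trigger notes.

KEYSWITCH = {"sustain": 24, "staccato": 25}   # e.g. C1 and C#1

def insert_keyswitches(notes, changes):
    """notes: list of (beat, pitch, velocity) tuples.
    changes: list of (beat, articulation) where the patch should switch.
    Adds a quiet trigger note slightly BEFORE each change, so the
    playhead crosses the key-switch before the music reaches it."""
    triggers = [(beat - 0.01, KEYSWITCH[art], 1) for beat, art in changes]
    return sorted(notes + triggers)

line = [(0.0, 60, 96), (1.0, 62, 96), (2.0, 64, 96)]
sequence = insert_keyswitches(line, [(0.0, "sustain"), (2.0, "staccato")])
```

Placing the trigger a hair early matters: as noted later, most sequencers only fire a key-switch when the playhead actually passes over it.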


Although there is a time and place for key-switching, I actually do not think it needs to be used very often. Unless you have composed a piece of music that makes use of many types of articulations, it's best to create an individual track for each articulation of an instrument you actually use.

A virtual instrument that contains key-switches will most likely load a bunch of articulations that you won't use, eating up computer memory that could go to other instruments. Some virtual instrument libraries allow you to dump the samples you don't want or need, but I find de-selecting them all time-intensive. It's not worth having to do that for every single instrument you're composing for within the piece.

Also, different articulations inherently produce different dynamic qualities. It will be far more difficult to mix a single track holding an instrument's many articulations when you could easily adjust their volumes and EQ needs on separate tracks.

Lastly, many sequencer programs trigger key-switch articulations with the playhead. This creates a quirk: if you jump around in your composition by clicking on measures directly, the last articulation you were on will still play in the section you've jumped to, regardless of what articulation you've programmed for that section.

The playhead has to pass over the MIDI trigger note of a section for its key-switch to fire. Basically, if you need to jump around in your composition, you are forced to jump to each key-switch trigger point in order to hear that section's articulation properly. Again, there's a time and place for key-switches, but most pieces really only use four different types of articulation, and setting up individual tracks for each one tends to be the easiest.

After focusing on some common issues with MIDI sequences that can be fixed without money being spent, I will now mention one that requires money to fix. Special articulations like portatos, glissandos, pinch harmonics, or octave runs will not sound realistic, with today's technology, if you use a modulation wheel or try to perform them manually into the software with a MIDI controller. These types of ornamentation carry so much information, subtlety, and nuance that it's best to trigger a pre-recorded sample of them. For example, if you want a harp glissando, a virtual sample of a virtuoso playing one will be far more convincing than running your finger up and down a keyboard while recording. (Unless, of course, you are a virtuosic harpist and can record it live.)

Obviously, purchasing a high-end sample library could solve this problem. However, not all high-end libraries include glissandos, portatos, crescendos, etc. You have to research and find the ones that encompass all of these features at the price you're willing to spend.

Here is the last comment I'd like to make about MIDI orchestration. If you are trying to emulate "live" acoustic or electric instruments using MIDI virtual instruments, do not compose parts that even a virtuosic player couldn't play in real life. If no one could play it in real life, it will stick out in your composition. If you are unsure whether what you've composed is possible, ask a proficient player of that instrument!
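A first, crude filter you can apply before bothering a player is a simple range check. The range numbers below are my own rough approximations in MIDI note numbers; real playability (double stops, leaps, breath, stamina) still needs a musician's judgment.

```python
# A rough sanity check: is every note inside the instrument's range?
# These (lowest, highest) MIDI note numbers are approximate and only a
# first filter, not a substitute for asking a proficient player.

RANGES = {
    "violin":  (55, 100),   # G3 up to roughly E7
    "trumpet": (54, 82),    # F#3 up to roughly Bb5 for most players
    "flute":   (60, 96),    # C4 to C7
}

def out_of_range(instrument, pitches):
    """Return the pitches the given instrument simply cannot play."""
    low, high = RANGES[instrument]
    return [p for p in pitches if not low <= p <= high]

print(out_of_range("violin", [40, 60, 80, 110]))  # -> [40, 110]
```

If this check flags nothing, the part may still be unidiomatic, which is exactly why the advice above ends with "ask a proficient player."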

I tried to touch upon just a few of the major MIDI orchestration hiccups I come across regularly. Most of the time, the issues can be solved with effort. The more detailed, ornate, and "humanistic" you make your sequenced composition sound, the more likely you are to attract a buyer. Even if you're not interested in selling your music, it really doesn't make sense to half-heartedly create art. Save that for the things you truly don't care about!

I'll end this post with a composition of mine made with 100% sequenced instruments; I feel I did a good job fooling people into thinking some of it wasn't.


I’m a big fan of cryptocurrencies. Most specifically Stellar Lumens (XLM). If you’re a fan of Stellar too, and found my information to be helpful, then let me know by sending some lumens.


USERNAME -> TarterSauce*lobstr.co




*EDM, rap, and aleatoric/atonal music were born from or influenced by electronic music technology. These genres embrace the synthetic capabilities of technology, and within them it's completely appropriate to manipulate sounds in a completely unnatural way.

