Article about midi. Published in Atari Magasinet #1, 1999 (Swedish magazine). Translated into English by the author.


Midi - Miracle or Nightmare?

In this article, I'll explain some basics about midi - how it works and what problems can arise. It's aimed mostly at beginners, but I believe more experienced readers may find something of interest here too...

For starters, it's no secret that Atari's great success among musicians was due to the built-in midi interface, which had to be bought as an expensive add-on for every other computer. Since midi was available at no extra cost, programmers saw a chance to sell many sequencer programs to Atari users, and rapid development took place, which eventually led to the highlights: Cubase Audio and Logic Audio for the Falcon. For a while, development was actually fastest on the Atari platform, since it was a very big market. Today the situation is the opposite. The prices for midi interfaces have fallen to a more reasonable level and development has virtually exploded thanks to the fantastic boom in processing power, where CPU speed is said to double roughly every 9 months... Because the Atari machines couldn't keep up with this extreme pace and meet the developers' and the market's demands, almost all development of commercial software for Atari has been discontinued :(

What is this magical mystery word "midi"? What's it for?
Midi is an abbreviation of "Musical Instrument Digital Interface", and it was developed as a standard for how synths, drum machines, effects and computers should "talk" to each other. Eventually, someone had the bright idea to make lighting equipment and mixers midi-aware too.
For a long time, most synth manufacturers used their own standards for how sounds were placed in the memory banks - sometimes the placement even seemed somewhat random - and every manufacturer naturally had their own view of what was correct... As long as you used the same synths from year to year, this was no issue, but as soon as you wanted to replace a synth, you were in trouble finding the right type of sounds to substitute for the old synth's. It also caused quite a lot of trouble for those who wanted to publish their music on the Web, or who made music for karaoke, travelling musicians, entertainers and so on.

This led a couple of wise men somewhere (probably in Japan) to discuss the problem, and the solution they came up with is called General Midi (GM). GM defines the sound banks so that a piano is always a piano - maybe with a slightly different sound, but still a piano - regardless of which synth you use for playback. There are variations of the sounds in other banks, and some sound modules have up to 1000 sounds or even more - which allows for plenty of variation while still being standardized, so that whoever listens to the music on another midi system can be sure it sounds close to what the composer intended - at least with the same types of instruments. Many synth modules of varying sound quality have seen the light of day, but today almost all of them have a GM mode if they aren't GM-compatible by default. Roland invented a system of their own on top of this, so their synths are not only GM-compatible - they also support the Roland variant General Standard (GS) - isn't it wonderful with standards...? ;)
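To make the idea concrete, here's a small sketch of my own (not from the article) showing how GM ties fixed program numbers to instrument types, and what the two-byte Program Change message that selects a sound looks like on the wire:

```python
# A few entries from the GM level 1 sound set. Program numbers are
# 1-based in most manuals, but sent 0-based in the actual midi data.
GM_PROGRAMS = {
    1: "Acoustic Grand Piano",
    13: "Marimba",
    33: "Acoustic Bass",
    49: "String Ensemble 1",
    57: "Trumpet",
}

def program_change(channel, program):
    """Build a Program Change message; channel and program are 1-based here."""
    # 0xC0-0xCF is the Program Change status byte, low nibble = channel.
    return bytes([0xC0 | (channel - 1), program - 1])

# Selecting program 1 on channel 1 gives some kind of grand piano
# on any GM-compatible synth - that's the whole point of GM.
msg = program_change(1, 1)
print(msg.hex())  # -> c000
```

The helper function and the 1-based convention are my own choices for the illustration; only the program-number-to-instrument mapping is standardized.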

If you have a synth with midi, the computer (with the right software, of course) can receive information about which key is pressed, how hard it's pressed and a lot of other things. You let the computer record what's played in a sequencer program, which can play it back while you record the next instrument. Now you have a whole orchestra at your disposal - one that never complains or interferes with how the song should be played, never comes too late, never loses the tempo or forgets the order of the song's parts - and all the other advantages you can think of. The disadvantage is, naturally, that you can miss the feedback you get from other band members, which in many cases is very stimulating for your development (it's easy to become too much of a lone wolf if you don't get others' opinions from time to time) - even if it's sometimes just annoying...

Everything can be recorded from the same keyboard, but in the sequencer program it's routed to other sound sources, like synth modules, drum machines etc. A very important feature when working with midi is "Local Off", which disconnects the keyboard from the built-in sound source in the synth and lets you play any sound source in the midi rig. If you don't use "Local Off", the synth's own sound will be mixed with the sound of the source you actually want while you play live or record - but on playback, only the intended sound source will be heard, which can be confusing since it sounds different from when you recorded... And if you let the sequencer echo the incoming midi back to the synth you're playing (that is, if you forget to switch to "Local Off"), the sound gets doubled, with a resulting flanger or chorus effect - again, only while you record or play live, not when you play back the recording. This can easily cause some confusion...

Midi is used on just about every record production today, to a larger or smaller extent, because it's tight without much need for rehearsal and because it lets composers do much of the work at home, away from the very expensive studio time that would otherwise be needed. Besides, you can do things in a midi environment that would be impossible to do by hand. Midi is used to manage mixers, so you don't become a nervous wreck from having to change numerous things during a mixdown, keeping everything in your head - while also keeping track of levels and listening to whether the new mix really is as good as the previous one... In such cases, it's good to have midi automation as a helping hand.

Midi does have some limitations, which can be important to keep in mind.
You have 16 midi channels to work with - that is, you can play a maximum of 16 different sound sources at the same time. This may seem like a lot, but it starts to feel limited after a while. Think of the following situation: Drums on one channel, Bass on the next, Piano on one, Strings on one, Brass on five channels to make it sound "real", extra drum sounds from a sampler on one channel, effect sounds on another and Marimba on one. How many was that? 12 channels - no problem... How about adding another piano channel to give the song more life, and another String channel to make the strings "wider" by panning the two apart? Now we're at 14. There's that groovy pluck guitar I need - 15 - and the solo synth I wanted takes the very last one. The next idea has nowhere to go...

Another aspect of the 16-channel limit is that it restricts your overview and your possibilities to combine and vary the different sound sources. It's a little frustrating when you've bought new sound modules that can play 8 or even 16 channels each - not being able to take advantage of all those channels is a waste, isn't it? One solution is to make an acceptable compromise - another is to get a midi expander that adds midi outputs to the system. For Atari there aren't that many, but the ones that exist work very well - for example Steinberg's Midex and SoundPool's MO-4, both with 4 extra midi outputs. All of a sudden there's a total of 80 midi channels available (5 outputs x 16 channels) - something that should be enough for most setups...

Delays and timing problems
When you connect several synths to the same midi output, you normally connect the output of the computer to the input of the first synth, then that synth's midi thru (which is a copy of what comes in) to the next synth's input, and so on... This can give you problems like delays, faulty or lost notes and various other strange things. Delays can occur if the synth has a "soft thru", where all the midi data passes through the synth's CPU before it's sent to the thru output. How long this takes varies a lot between manufacturers. On some synths the "soft thru" is only an optional setting, and the input is otherwise wired directly to the thru without going via the synth's CPU - in those cases there are no delay problems. Another possible problem is that when you chain several synths thru-in-thru-in, the sensitive signals are degraded by passing through many optocouplers and conversions. Each conversion is very quick, so the delay it adds can be disregarded (it's a matter of nanoseconds) - but after many conversions the signal starts to get weaker, and the waveform edges lose the sharpness needed to define the data, which can lead to faulty or lost notes. This is sometimes misinterpreted as delays, but that's not what it is. The solution to both problems is simple: a thru box, which has one input and several thru connectors, each carrying an exact copy of the original signal. Connect those thrus to all the synths' inputs and you eliminate both risks.

On to the next problem: Speed
Midi is serial communication at a speed of 31.25 kbaud, and there's a lot of information to squeeze in to get a tight-sounding result. Ten bits (each a digital 1 or 0) make up one midi word: a start bit, followed by 8 data bits, and finally a stop bit. Each midi word takes 320 microseconds to send. The first midi word is always a status word, which tells which midi channel is used and what type of information will follow. The next midi word is a data word, whose meaning depends on the kind of information being sent; every data word is also 10 bits. When you play music, the most common message is probably "Note On", which consists of one status word and two data words - so three midi words have to be sent for one tone to be played. Here is an example of the total sending time for a fairly common arrangement: one synth with a 4-note chord, 4 synths with 3 notes each, plus bass drum, snare, hi-hat and congas, all at the same time. Even in this simple arrangement, there are 20 notes that should appear simultaneous. Sending them takes 20 notes x 3 words per note x 320 microseconds per word = 19,200 microseconds = 19.2 milliseconds. In other words, the time difference between the first and the last note will be about 19 milliseconds, since the serial communication makes it impossible to send everything at exactly the same time. 19 milliseconds may not sound like much, but depending on how sensitive you are, it can still cause trouble in some cases...
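The arithmetic above can be sketched as a few lines of Python (a back-of-the-envelope helper of my own, just to make the numbers easy to play with):

```python
BAUD_RATE = 31250      # bits per second on a midi cable
BITS_PER_WORD = 10     # 1 start bit + 8 data bits + 1 stop bit
WORD_TIME_US = BITS_PER_WORD * 1_000_000 // BAUD_RATE  # = 320 microseconds

def burst_time_ms(notes, words_per_note=3):
    """Time to push out `notes` Note On messages (status + 2 data words each)."""
    return notes * words_per_note * WORD_TIME_US / 1000

# The arrangement from the text: 20 notes meant to sound simultaneously.
print(burst_time_ms(20))  # -> 19.2 (milliseconds)
```

Doubling the note count, or adding controller data on top, scales the figure linearly - which is exactly why dense arrangements stop sounding tight.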

In a very complex arrangement, it's even more apparent - not to mention what happens in modern dance music, where sweeping filters in real time is very common - something that's capable of wrecking any system's midi timing. Keep in mind that every midi event takes its time and adds delay to the midi signal flow, because it's a rather slow serial transfer. A sweepable filter in a synth has 256 steps from the lowest to the highest value, which means a full sweep uses 512 midi words (if the filter command uses a status word and only one data word) - which says something about how much delay you can run into by playing around too much with filters.

Something that came rather quickly after the original midi standard was set is "running status", which all modern synths use. It's an addition to the standard that allows slightly higher throughput: when several consecutive messages would carry the same status word for the same midi channel, the status word is only sent the first time (never the second - or the third - or the fourth...). This saves some time, but unfortunately not a lot; in my example above, the time would end up at 15-16 milliseconds instead of the 19 I calculated... The speed problems in the midi standard are rather serious, since a timing error of about 50-100 milliseconds is clearly audible - it just doesn't sound "tight". To get a feel for how fast that is, imagine a clock ticking every second - you hear the seconds tick away. Now imagine something ticking 10 times as fast - it still sounds like ticking, and the time between those ticks is 100 milliseconds. In the example there were 19 milliseconds between the first and last note, which is acceptable - you won't hear the difference. But with twice as many notes, or some real-time filter sweeps, pitch bend, modulation, aftertouch and so on, it would be hard to make it sound good.
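As a rough sanity check of that 15-16 millisecond figure, here's a sketch comparing word counts with and without running status. How the 20 notes are split into per-channel bursts is my own assumption (a 4-note chord on one channel, four 3-note synth parts on their own channels, and four drum voices on separate channels); the actual saving depends entirely on that split:

```python
WORD_TIME_US = 320  # one 10-bit midi word at 31250 baud

def ms_plain(notes_per_burst):
    """Without running status: every note costs a status word + 2 data words."""
    words = sum(3 * n for n in notes_per_burst)
    return words * WORD_TIME_US / 1000

def ms_running_status(notes_per_burst):
    """With running status: one status word per burst, then 2 data words per note."""
    words = sum(1 + 2 * n for n in notes_per_burst)
    return words * WORD_TIME_US / 1000

# 20 simultaneous notes, split over nine channel bursts (my assumption).
bursts = [4, 3, 3, 3, 3, 1, 1, 1, 1]
print(ms_plain(bursts))           # -> 19.2
print(ms_running_status(bursts))  # -> 15.68
```

With this layout, running status trims the burst from 60 words to 49, landing in the 15-16 millisecond range mentioned above - a help, but not a cure.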
The solution to all this is more midi outputs - on the Atari, an MO-4 or a Midex gives 4 extra outputs and takes the load off the single output that would otherwise carry everything, since the midi data can be split between all 5 outputs. In my own studio, I've connected my Yamaha CS-1x to its own output, so I don't risk any problems when I feel like fooling around with real-time filters. It's also critical when you want to automate a midi-controlled mixer: all the synth notes and the mixer changes have to be sent at the same time, and a single midi output would be nearly impossible to work with. It's much better to dedicate an output exclusively to the mixer.

The speed limitations of midi do cause some trouble, but when the standard was set, electronics were much more expensive than today, and so were cables. This led to the current standard, since it made it possible to use fairly cheap cables of reasonable length (up to 15 m) without requiring expensive double-screened cables or more expensive interface circuits. There actually was a proposal for a new standard with a doubled baud rate, but unfortunately it never became reality.

As usual, readers are welcome to send questions and opinions. Tell me what you'd like to read about.

Claes
claes@holmerup.com
www.holmerup.com