
Sound Advice

Location CD Recording: Miking Techniques by Earl McCluskie

June 19th, 2009

Location recording of non-live (session) projects has its pros and cons. On the pro side are natural acoustics, a unique sonic character that can give the recording a distinctive sound, prestige from the name of the facility, and sometimes lower rental costs. On the con side are external noises, little or no control over early reflections and reverberation, difficulty isolating musical elements, and less-than-ideal control room monitoring conditions.

If the cons can be overcome, or ways found to deal with them successfully, good recordings can be made. These recordings need not be limited to classical repertoire, which is typically recorded in natural acoustics, or to “live performance” environments. As an example, a 40-voice choir backed by piano, bass, and drums singing contemporary jazz-influenced music can be successfully recorded in a natural ambience.

The choir sound one would naturally pick up in a church or concert hall, using mic techniques associated with classical choral recording, would have a significant amount of ambience and depth, suitable for that style of music but lacking the warmth and presence associated with a contemporary “pop” sound. A good hall acoustic has a life and character that only the best studios can emulate, so it is often worth finding a way to capture this sound.

Close-miking the choir would defeat the advantage of the hall by suppressing its attractive natural acoustic. Even the best cardioid pattern mics have significant colourations resulting from their uneven off-axis response, and these often do not complement the room acoustics. A carefully placed array of three or four omni mics over the choir can produce a natural-sounding pickup.

Position the choir members as equidistant from the mics as possible, with the lower voices singing directly on-axis and the higher voices projecting slightly below the mics’ 0-degree axis. The distance between the mic array and the choir will also depend on the direct-to-early-reflection balance that sounds best. Use two additional omni mics placed behind the choir to capture its warmth and to reinforce the lower male voices, which tend to be more omnidirectional.

Earl McCluskie is a producer/engineer and Owner of Chestnut Hall Music, a music production company based in the Waterloo region of Ontario. The company specializes in location CD recording, both live and session. Recent projects have included Vancouver-based composer Timothy Corlis with the DaCapo Chamber Singers and the Guelph Symphony Orchestra.

Mastering Pet Peeves by George Graves

April 19th, 2009

1. The majority of vocalists don’t know how to use a mic, and what is worse is that a lot of engineers don’t know how to teach them. As you’d imagine, the outcome sounds pretty bad. That’s the most irritating scenario for me. I find sibilance very harsh, and it can easily be tamed in the early stages with the proper mic and technique for a given vocalist. Sometimes you need a de-esser in conjunction with a compressor/limiter (a simple sketch of the idea appears below), and of course, using the best-sounding A/D converter possible is key. I know a lot of these lower-priced rigs don’t give you that capability, but as a sound engineer, you need to strive to get the best you can possibly afford – and in some situations, less is more. Adding a lot of EQ after the fact rarely helps the recording.

2. Another thing with vocalists is pops. Because the vocal is usually one of the loudest instruments in a mix, all sorts of odd sounds that would normally dissipate in the air when someone speaks without a mic get captured during recording. It doesn’t help that the mic is often placed at the worst position – the jaw – so all of these bad sounds are aimed straight into the diaphragm.

There are other things I could mention, but for time’s sake, I’d say those are two dreaded instances for mastering engineers.
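To make the de-essing George mentions in point 1 concrete, here is a minimal Python sketch of a wideband de-esser: a band-pass filter isolates the sibilant energy, an envelope follower tracks its level, and gain is reduced when that level crosses a threshold. The 5-9 kHz band, threshold, and ratio are illustrative assumptions, not settings he recommends.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def deess(audio, sr, lo=5000.0, hi=9000.0, threshold_db=-30.0, ratio=4.0):
    """Wideband de-esser sketch: reduce overall gain when the sibilance
    band gets too hot. All settings here are illustrative assumptions."""
    # Isolate the (assumed) sibilant band with a 4th-order band-pass.
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    sibilance = sosfilt(sos, audio)

    # Simple peak-hold envelope follower with ~5 ms release.
    env = np.abs(sibilance)
    release = np.exp(-1.0 / (0.005 * sr))
    for i in range(1, len(env)):
        env[i] = max(env[i], release * env[i - 1])

    # Compressor-style gain reduction above the threshold.
    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    over_db = np.maximum(env_db - threshold_db, 0.0)
    gain = 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
    return audio * gain
```

A real de-esser would usually attenuate only the sibilant band rather than the whole signal; the wideband variant above is simply the shortest version of the concept.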

George Graves is a Mastering Engineer at Toronto’s Lacquer Channel Mastering with over 40 years of industry experience.

Accurately Measuring Distortion by Wayne Jones

April 19th, 2009

There are two main areas of distortion measurement. The most common is total harmonic distortion plus noise – that’s what most audio distortion analyzers will characterize. It’s a good measurement of performance, but where it falls apart is in measuring distortion at high frequencies, especially in band-limited devices (and so many components are band-limited). All digital systems are limited to half of the sampling frequency, so they’ll automatically be limited by the anti-aliasing filters to 20 kHz or so. That means it makes no sense to measure THD above around a third of that frequency, so THD readings above 7 kHz don’t mean anything. They’ll give you a number, and it’ll probably look really good because the filter is rolling off the harmonics, so it’s really just measuring noise – not distortion.
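The arithmetic behind that one-third rule of thumb is simply that a meaningful THD reading needs at least the second and third harmonics to survive the band limit. A quick Python sketch, assuming a 20 kHz band edge:

```python
# Which harmonics of a test tone survive a 20 kHz band limit?
band_edge = 20_000  # Hz, assumed anti-aliasing cutoff

for fundamental in (1_000, 7_000, 10_000, 15_000):
    harmonics = [n * fundamental for n in range(2, 6)]
    in_band = [h for h in harmonics if h <= band_edge]
    print(f"{fundamental:>6} Hz fundamental -> harmonics in band: {in_band or 'none'}")

# At 7 kHz only the 2nd harmonic (14 kHz) survives; at 15 kHz even the
# 2nd harmonic (30 kHz) is gone, so the analyzer is left reading noise.
```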
That doesn’t mean there’s no distortion at high frequencies. Your ear will certainly tell you that there’s indeed distortion. So how do you characterize it? In a band-limited medium, analog tape as well as today’s digital systems, intermodulation distortion (IMD) measurements are a way to characterize the higher frequencies. One type of IMD measurement is the so-called “Twin Tone,” where you take two high-frequency signals (15 and 16 kHz, or 18 and 19 kHz) and look for the difference-frequency component at 1 kHz. That will give you a true, accurate, and usable characterization of high-frequency distortion – right up to the band edge. If your system cuts off at 22 kHz, you could measure with 21 and 22 kHz tones and get a true characterization of distortion at high frequencies.
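As an illustration, here is a minimal Python sketch of a twin-tone measurement: drive a simulated device with two closely spaced tones and read the level of the difference product off an FFT. The tone frequencies, window, and toy nonlinearity are assumptions for the demo, not part of any standard analyzer.

```python
import numpy as np

def twin_tone_imd(signal, sr, f1=18_000.0, f2=19_000.0):
    """Level of the difference product (f2 - f1) relative to the
    stimulus tones, estimated from an FFT of the device output."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / sr)

    def peak(f, width=50.0):
        band = (freqs > f - width) & (freqs < f + width)
        return spectrum[band].max()

    diff = peak(f2 - f1)                # difference product at 1 kHz
    ref = 0.5 * (peak(f1) + peak(f2))   # average stimulus level
    return 20.0 * np.log10(diff / ref)  # IMD in dB relative to the tones

# Demo: a weakly nonlinear "device" driven by 18 and 19 kHz tones.
sr = 96_000
t = np.arange(sr) / sr                  # one second of signal
x = 0.5 * np.sin(2 * np.pi * 18_000 * t) + 0.5 * np.sin(2 * np.pi * 19_000 * t)
y = x + 0.01 * x ** 2                   # 2nd-order nonlinearity
print(f"Twin-tone IMD: {twin_tone_imd(y, sr):.1f} dB")
```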
This method originated in the ’40s for measuring optical film soundtracks, which at the time carried all film sound. The track was optical, running along the edge of the film, before film began being striped with a magnetic coating for magnetic soundtracks. The problem was that the upper frequency limit of an optical soundtrack was 7 kHz, so all of those early films from the ’40s, ’50s, and even the ’60s stopped at 7 kHz. People in the film industry and SMPTE recognized that total harmonic distortion measurements above 1 or 2 kHz were meaningless, so they came up with the SMPTE Intermodulation Distortion Method, which used a 7 kHz and a 60 Hz signal and measured the intermodulation products developed from them. It proved to be a realistic, accurate, and useful characterization of the distortion of an optical film system.
My advice is that if you’re looking at a band-limited device, as most things are now, be careful measuring THD above a certain frequency, and use other techniques to get a better characterization of what’s really happening.

Wayne Jones has almost 40 years of experience in the pro audio and audio test and measurement fields. He has served on various standards committees and has been a consultant to companies like Intel, Microsoft, and SigmaTel in recent years.

Following The Golden Rule by Andy Hermant

February 19th, 2009

Andy Hermant believes a mix of analog and digital recording techniques is the ideal way to work. PS asked which stages should be performed in which domain.

I call it my Golden Rule. First, you’ve got to have a great song. Nothing starts without a great song. Then you need to have a great performance. If you don’t have the song or the performance, you’re still nowhere.

But after that, grab the best microphone you can find, the best preamp, and the best converters. Then comes the all-important mic technique. Once you have that pure source material, you can bring it into the digital domain and go crazy and manipulate the recording to get the most out of it.

If you don’t have the goods going in, you won’t have the goods going out.

Andy Hermant founded the Manta Sound Company, Canada’s first digital multi-track studio, Duke Street Records, and 1:2:1 Recording. He was manager of Post Production at the CBC for 13 years and has served on the boards of CARAS, CIRPA, FACTOR, and Roy Thomson and Massey Halls.

Creating A Better MP3 by Noah Mintz

February 19th, 2009

About two years ago, I set out to create a better-sounding MP3 file. I tried all the different encoders, bit-rates, and technical options. To my ears, there wasn’t much of a difference – they all sounded bad.

As a mastering engineer, I was disappointed to hear musicians’ hard work end up like this. In the end, it didn’t matter why MP3 was technically inferior; all that mattered was that it didn’t sound as good as the 16-bit/44.1 kHz source, not to mention the 24-bit masters from which the CD was made. I concluded, then, that encoding a better MP3 was impossible. So now what?

MP3 was not going away. Even now it’s still the most widely used and widely supported lossy audio format, and I imagine it will be for some years to come. So, if a better MP3 through improved compression and encoding is not possible, is there something else that can be done? The answer is yes: create a better mix.

Creating a better mix creates a better MP3. Yes, this is obvious, but maybe not for the reasons you might think. Just as engineers in the ’60s, ’70s, and ’80s mixed around the limitations of lacquering (cutting the vinyl master), I believe mixing engineers should mix with some awareness of the limitations of MP3, since that is how most people will listen to recordings. The good news is that mixing with MP3 in mind will also create better mixes for CD or high-resolution production.
Here is a short list of tips with explanations:

  • Limit your limiting. Digital peak limiting during the recording or mixing process raises your noise floor, reduces your dynamic range, and adds to your overall distortion.
  • Use your available bandwidth. Mix to 0 dB; that means your peaks should be at or near 0 dB. For every bit of “headroom” you leave, you lose perceived bit resolution.
  • Record at the highest bit depth and sample rate possible. As Rupert Neve once pointed out, harmonics extending above the audible range still affect what we hear below it. Recording at higher sample rates and bit depths will still sound better, even if the final product is a 16-bit/44.1 kHz source compressed to MP3.
  • Avoid dithering. Despite what people say about dithering, it really only sounds good when added at the final conversion from 24-bit to 16-bit. Dithering adds noise, and noise doesn’t compress well.
  • Be aware of the MP3 compression process. Mid-range material (vocals, guitars, snare drums) compresses well. Everything else, especially broadband noise (bass, cymbals, “air” frequencies), is difficult to compress and can distort your MP3. Use this knowledge, especially in the mastering process, to shape some of your EQ decisions.
  • Monitor the side (difference) channel and make sure it’s not distorting. Distortion in the side channel can really mess up the encoding process. To monitor it, invert the phase of either the left or right channel of the master bus and then sum the bus to mono; the resulting audio is what you want to make sure is not distorting (see the sketch after this list).
  • Use a mastering studio that understands the MP3 process. There are limitations of MP3 that go beyond audio quality. If you take one song and download it from different sources (legal and otherwise), you quickly realize that there is no standard for MP3 creation; they go from bad to worse. Beyond that, the metadata (the artist information embedded into the file itself) is not consistent. Lacquer Channel (my mastering studio) launched enhancedMP3 in January 2009. It’s a CD-ROM portion on the audio disc (much like an enhanced CD) that contains the highest-quality, artist-approved 320 kbps MP3 files available. We use a custom proprietary process to ensure the file compresses the cleanest way possible. Read more about it at www.enhancedmp3.com.
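For anyone who wants the side-channel check from the list above spelled out, here is a minimal Python sketch, assuming float stereo buffers; the 0.5 scale factor and the -0.1 dBFS ceiling are illustrative assumptions, not part of Noah's workflow.

```python
import numpy as np

def side_channel(left, right):
    """The side (difference) channel: invert one channel and sum to
    mono, as described above. The 0.5 keeps the result at unity scale."""
    return 0.5 * (left - right)

def check_side(left, right, ceiling_db=-0.1):
    """Report the side channel's peak level and how many samples sit
    at the (assumed) digital ceiling, where clipping would occur."""
    side = side_channel(left, right)
    peak_db = 20.0 * np.log10(np.max(np.abs(side)) + 1e-12)
    at_ceiling = int(np.sum(np.abs(side) >= 10.0 ** (ceiling_db / 20.0)))
    print(f"Side peak: {peak_db:.2f} dBFS, samples at ceiling: {at_ceiling}")

# Example: two slightly decorrelated channels.
t = np.arange(44_100) / 44_100.0
check_side(np.sin(2 * np.pi * 440 * t), 0.9 * np.sin(2 * np.pi * 440 * t))
```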

MP3 is here for a while. Using some recording and mixing smarts, and using a mastering studio that understands the limitations of an MP3 file, will go a long way to ensuring that the sonic intent of the music is not lost.


Noah Mintz is a Mastering Engineer at Lacquer Channel Mastering and the creator of enhancedMP3.com technology.

In-Ear Monitors: Tips & Tricks: Part 1 by Keith Gordon

August 19th, 2008

In this issue, we’ll look at some tips and tricks I’ve picked up over the years for getting better results with in-ear monitors (IEMs), plus some interesting problems I’ve helped people overcome.
The major starting point for any IEM, or even MP3 player earphone, is fit. Without good fit, you are fighting a losing battle for quality sound right from the start. At the recent NAMM Show, I had a couple of musicians ask me why they could not get any more gain out of their lead vocalist’s IEM system. The minute they added the keyboards to her vocal mix, it compressed the vocal and caused the limit lights to activate on the transmitter and receiver belt pack.
As I continued asking questions to dig down to the true root of the problem, it was eventually determined that her IEMs fit so poorly that they fell out constantly. This indicated the “limiter” issue was really a case of a terrible seal at her ears. The poor seal meant she had a great deal of loud external stage noise to overcome, that a good portion of the sound her IEMs did create was escaping her ear canal, and that she heard very little low-frequency content, since a proper tight seal is necessary for good bass reproduction.
This aspect – the proper acoustic seal at the ear – is doubly important because of a psycho-acoustic effect whereby our brains perceive an increase in bass/low frequencies as an increase in overall volume. In practical terms, getting a better seal for stronger bass, or turning up the low end, creates the effect of turning up the entire mix, but without the damage to our hearing that would occur if we simply raised the overall mix a seemingly equal amount. This allows for longer exposure times before the harmful effects of volume set in. It also means musicians should not wear their IEMs loosely, with an intentionally broken seal, just to hear the outside world better. Instead, they should use audience or ambience microphones as a regular part of their monitor mix. These microphones can then be turned up by the monitor engineer between songs, so performers can keep wearing their properly sealed IEMs.
Next issue, we’ll continue with more on psycho-acoustics and other tricks of the trade.

Keith Gordon is a veteran audio engineer who began using IEMs in the mid-90s. Recently, he helped develop a DSP-based hardware/software IEM system in conjunction with Westone Laboratories. He can be reached at keithgordonca@gmail.com.

Contact

4056 Dorchester Rd., #202, Niagara Falls, ON
Canada L2E 6M9
Phone: 905-374-8878
FAX: 888-665-1307
E-mail: mail@nor.com