
Archive for the ‘Uncategorized’ Category

Staying In Synch Part II: Jitter Is A Four-letter Word by Bob Snelgrove

Tuesday, February 18th, 2003

Jitter is a word we hear often and that manufacturers are quick to quote. When we hear “Low Jitter” we all know that this is a good thing – and it is. The causes and cures for jitter are complex, and many discussions of jitter are misleading. A quick overview of jitter is important because it is the most common source of timing errors, poor synchronization and, ultimately, bad sound in the digital studio.

I am going to borrow Julian Dunn’s definition of jitter: “The variation in the time of an event – such as a regular clock signal – from nominal.” On paper, the waveform representing the Word Clock signal is a textbook-perfect square wave. In the real world, even a “perfect” square wave has finite rise and fall times with some undershoot and overshoot. At the end of a cable it will be somewhat less than square, and in situations where long cable runs or bad termination are involved it may not be square at all, becoming a problem causer instead of a problem solver.

Jitter – for whatever reason and however caused – is present when the transitions in the square wave do not line up in time with the expected, or nominal, periodicity. Jitter is also created when distorted edges are misinterpreted as timing events. This is a dynamic condition: the timing error is random and constantly changing, sometimes ahead of nominal and sometimes behind. The word “jitter” is appropriate because this deviation from nominal is never a stable offset; it always varies. In effect, the timing information jitters back and forth in time.
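
Since jitter is simply deviation from the nominal clock period, it can be expressed as a number. Here is a minimal Python sketch, not from the article, that computes peak and RMS deviation from a list of measured clock-edge timestamps; the edge values and function name are hypothetical.

```python
# A minimal sketch, assuming you already have measured clock-edge timestamps
# (in seconds) from a jitter analyzer or a captured waveform.
import statistics

def jitter_from_edges(edge_times, fs=48_000):
    """Peak and RMS deviation of clock edges from an ideal, jitter-free grid."""
    nominal_period = 1.0 / fs                     # about 20.833 microseconds at 48 kHz
    t0 = edge_times[0]
    deviations = [t - (t0 + i * nominal_period)   # offset of each edge from nominal
                  for i, t in enumerate(edge_times)]
    return max(abs(d) for d in deviations), statistics.pstdev(deviations)

# Hypothetical 48 kHz clock whose third edge arrives 2 ns late
edges = [0.0, 1 / 48_000, 2 / 48_000 + 2e-9, 3 / 48_000]
peak, rms = jitter_from_edges(edges)
print(f"peak jitter: {peak * 1e9:.2f} ns, rms: {rms * 1e9:.2f} ns")
```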

Many different conditions can produce jitter. One of these is any interference signal that modulates the Word Clock lines, a ground loop for example. The type and severity of the jitter will be determined by that ground modulation, as these ground signals can range from 60 Hz hum all the way up to dimmer noise.

To properly measure or observe jitter it is necessary to first have a stable frequency reference, and then a method of dynamically comparing this reference to the actual signal. Many people attempt to measure jitter by simply looking at a Word Clock or AES3 waveform on an oscilloscope. I have one magazine article on file that drew erroneous conclusions about several high-end Word Clock products using completely faulty methods for its jitter measurements and aural comparisons.

The reason we are concerned about jitter is that each internal clock in our studio locks onto the edges of the incoming Word Clock square wave and uses the spacing between transitions to determine the operating frequency of the Word Clock, which it then uses as its own timing reference.

If the edge of each Word Clock square wave is occurring at different times, or the Word Clock receiver is triggering on it at different times, then each clock will have a slightly different time base – exactly the problem we are trying to eliminate.

Jitter is also a concern because it can be caused in so many ways. It is common to find Word Clocks that generate very low jitter from their internal reference but pass on high amounts of jitter when locked to an external reference like Video Sync or Time Code.

Lock between an external reference like SMPTE time code or Video Sync and Word Clock is only excellent if the two lock without transferring any jitter.

If you have a master Word Clock generator in your studio and everything works beautifully until you lock that generator to incoming video, then high jitter passed through from the video to your Word Clock outputs is your problem.

Assuming that the master Word Clock generator has no significant jitter itself the problem then becomes one of minimizing jitter that can easily be induced or transferred into the timing signal as it moves from device to device and, in a larger facility, from room to room. This involves paying careful attention to cables, connectors and termination.

Next to poorly designed digital recording equipment and poorly designed master Word Clocks, cables, equipment interconnection and improper termination are the most common causes of timing errors, because they are a direct cause of jitter. All Word Clock signals must be connected using RG59 coax cable and properly terminated. RG59 cable has a characteristic impedance of 75 ohms, and the BNC connectors that go on it are specifically designed for use at this impedance. RG59 cable provides a signal path that minimizes cable-induced jitter when fed from a 75-ohm source, and it must be terminated with a 75-ohm load to function properly.

Cable runs must be kept as short as possible, and separate, isolated Word Clock outputs should be used to feed each Word Clock input if at all possible. The most jitter-critical pieces of equipment in your studio are your A/D and D/A converters. To ensure the lowest possible jitter, these must be physically located as close to the master Word Clock outputs as possible in order to benefit from short cable runs. To do this, mount your Word Clock generator near your converters in the same equipment rack. The metal grounding of the rack will also help minimize ground-induced jitter.

While it is common to use BNC-T connectors to distribute a single Word Clock output to several devices, this must only be done after you are certain that the 75-ohm termination for each Word Clock input you are looping through can be switched off. Restrict the use of BNC-Ts to loops of no more than three Word Clock inputs, keep the looped cable lengths between Ts very short, and remember that the last BNC-T in the chain must be terminated.

A Word Clock output cannot be terminated by more than one input. Double termination is a common problem when looping Word Clock using BNC-T connectors. One termination per Word Clock output is required for optimum performance; no termination, or more than one termination, per output will cause excessive jitter and seriously degrade performance. Beware of 50-ohm coax cables and connectors – they are not the same thing.

It is well accepted that jitter introduced into the AES3 bit stream manifests itself as clearly audible sonic degradation. Low-jitter generation of reference Word Clock timing signals, their jitter-free synchronization to video and SMPTE, and their proper distribution between digital audio devices throughout the studio will result in precise timing of the digital audio bit streams. It will also deliver a noticeable improvement in the quality and subtlety of recorded and mixed music, as well as in the reliability of transfers, spotted sound effects and layback to video.

Bob Snelgrove is the President of GerrAudio Distribution and the Canadian Product Specialist for Audio Precision test instrumentation.

Safety Tips While On Tour

Wednesday, December 18th, 2002

It’s always a good idea for at least one member of the crew to be trained in first aid techniques. It may as well be you… In the meantime, there are self-help and precautionary steps that everyone involved in PA work can take. First of all, here are some quick tips on preventing back strain when lifting and moving heavy gear.

— First of all, if the object looks like it’s too heavy for one person to lift, just get some help. Forget macho – how macho is it to be laid out in a hospital bed in traction?

— If you are going to tackle it yourself, think of the following word: BACKUP. It stands for the following:
Back straight – don’t curve your spine
Avoid stretching – keep the object close
Clutch firmly – get a good secure grip
Knees bent – helps with balance
Use your legs – let them take the strain
Putting down – do it the same way

If you’ve wrenched something, bashed something, cut something or you’re just generally feeling poorly, think on this: Hospitals are no friends of minor complaints, and in some countries treatment is uncertain and expensive. Or you might be stuck on a festival site, feeling ill, but too badly needed to leave. Or say you witness a fellow crewmember lying injured, and there’s no one else to help them…

Assistance could be at hand, in the form of a book like The Family Guide to Homeopathy by Dr. Andrew Lockie, which has some sound advice on first aid and ‘bodily disorder’ treatment, using homeopathic remedies where appropriate. The remedies listed can be safely self-prescribed and are low-cost. A basic first aid kit of about 20 types of ‘remedy’ pills, one tincture and five creams covers most situations – from burns, crush injuries, weird food poisoning, sprains and smog fumes to all manner of other minor troubles that stop you from giving 100 per cent.

Of course, if the injuries are plainly serious, or first aid doesn’t ease matters fairly quickly, or symptoms worsen, immediate hospitalization is advisable.

This article is reprinted with permission from The Live Sound Manual, published by Backbeat Books, www.backbeatbooks.com. All information is copyrighted and cannot be reprinted without the permission of the publisher.

Staying In Synch – Part I: Word Clock Explained by Bob Snelgrove

Wednesday, December 18th, 2002

Everyone using more than one piece of digital audio equipment should be concerned about the quality of their studio’s Word Clock. This article will explain the critical role that Word Clock quality and distribution play in the digital audio environment and the audible effects that poor clocking has on digital music systems. Examples of common mistakes will be given, along with suggestions for proper hook-up.

I need to start off by saying that when it comes down to quality of sound, I tend to approach the wonders of digital audio with a healthy degree of caution. I love my CD player and I personally own and enjoy lots of other digital audio toys. Nevertheless there are still many problems with the digital representation of audio that have yet to be solved. All of these problems relate to digital audio’s sonic accuracy and transparency.

At best, well-executed Word Clock generation, synchronization and distribution will make your studio sound audibly better and allow you to become a better artist, engineer or producer. At worst it will simply eliminate countless technical gremlins that will make you inefficient, ruin your mixes and drive you crazy.

Timing
Despite the many similarities, digital audio has a couple of unique and major differences compared to analog audio. The first is that an analog audio signal is a continuously varying signal, while its digital counterpart is represented by a limited number of discrete numerical values. The second is that these numerical values represent the analog signal only at specific points in time, or sampling instants, rather than continuously at every moment. Sampling instants are determined by various devices and processes, the most critical being Analog-to-Digital and Digital-to-Analog conversion. Converters are responsible for transforming an analog signal into a digital representation and back again; this is where it all starts and this is where it all ends.
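
As a toy illustration, not from the article, the sketch below reduces a continuous 1 kHz tone to discrete values taken only at the sampling instants defined by a 48 kHz sample clock.

```python
# A toy sketch: a continuous signal represented only at discrete sampling instants.
import math

fs = 48_000                       # sampling frequency: 48,000 instants per second
freq = 1_000                      # a 1 kHz test tone
samples = [math.sin(2 * math.pi * freq * n / fs) for n in range(48)]  # 1 ms of audio

# Each value describes the signal only at t = n / fs; between those instants
# the digital representation carries no information at all.
print(len(samples), "samples:", [round(s, 3) for s in samples[:4]])
```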

A sample clock determines when these sampling instants occur. All digital audio devices have some form of sample clock to control their internal sample rate, or sampling frequency. In a studio where we integrate many different pieces of equipment that all depend on their own clocks to function, we will invariably have sample instants taking place at different times unless we synchronize all the clocks in each piece of equipment and tie the timing of these events together.

This synchronous timing is required because, unlike analog audio, digital audio has a discrete time structure consisting of individual samples. Communication between different digital audio devices, or the mixing of different digital audio signals together, will fail if each device is not producing its bits of data in precise co-ordination with the others.

Poor-quality timing between multiple digital audio devices, or the improper distribution of those timing signals, will result in non-synchronous operation and the creation of random, highly audible artefacts, often described as clicks, pops or glitches.

There are two standard timing signals used to synchronize the internal sample clocks of digital audio equipment. The first, commonly used in large post-production and broadcast facilities, is the AES3 Digital Audio Reference Signal, or DARS for short. This bi-phase signal’s carrier is exactly the same as a balanced AES3 signal but carries no audio data (digital zeros) in the data stream. An XLR sync input for it can usually be found on high-end audio equipment and workstations. DARS is distributed the same way as AES3: via balanced 110-ohm digital audio cable and AES3 XLR-type connectors.

There are two things that make the DARS or Audio Black signal particularly attractive. The first is its high frequency of operation, which is between 2 and 3 Mbits per second. The second is the fact that the professional AES3 interface is balanced and ground isolated making it relatively immune to induced noise, which can be a major source of jitter.

The most common clock distribution method, however, is Word Clock. The Word Clock waveform is a simple unbalanced square wave, designed to be distributed on 75-ohm unbalanced coax cable terminated with BNC connectors. In order for synchronous operation to take place, all digital audio devices must be fed one of these timing signals from a master reference Word Clock time base. These are typically referred to as Word Clock generators, and sometimes as synchronizers, depending on the functionality they provide. Great care must be taken to distribute these Word Clock signals properly to each piece of digital equipment in the studio, or the timing signals will be degraded and audio quality will suffer.

Word Clock Generation
The most basic requirement for a Word Clock generator is that it must produce high-quality, stable square waveforms. The square waveform will be at one of two frequencies, either 44.1 or 48 kHz, also referred to as the base frequency or Fs. The generator must also be able to produce industry-standard multiples of either of these two base frequencies, yielding Fs x1 through Fs x4. This x1 through x4 multiplication generates the common sample frequency sets we are all familiar with in digital audio and yields Word Clock frequencies in the range of 44.1 to 192 kHz.

A special case of declining interest and usefulness is Digidesign’s proprietary SuperClock at Fs x256, a frequency between 11.2896 and 12.288 MHz. Because different manufacturers design their products to accept different multiples of the Word Clock base frequencies, all multiples from x1 to x4 must be supported, and the master Word Clock generator must be able to produce different Word Clock frequencies from different outputs simultaneously.
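
To make the frequency families concrete, here is a small sketch, not from the article, that enumerates the two base rates, their common x1/x2/x4 multiples and the legacy Fs x256 SuperClock rates.

```python
# A quick sketch of the Word Clock frequency families described above.
BASE_RATES_HZ = (44_100, 48_000)          # the two base frequencies (Fs)

def word_clock_rates():
    rates = {}
    for fs in BASE_RATES_HZ:
        for mult in (1, 2, 4):            # the familiar 44.1/88.2/176.4 and 48/96/192 kHz sets
            rates[f"{fs / 1000:g} kHz x{mult}"] = fs * mult
        rates[f"{fs / 1000:g} kHz x256 (SuperClock)"] = fs * 256
    return rates

for name, hz in word_clock_rates().items():
    print(f"{name:30s} {hz:>12,} Hz")
# e.g. 44.1 kHz x4 -> 176,400 Hz, 48 kHz x4 -> 192,000 Hz,
#      44.1 kHz x256 -> 11,289,600 Hz (11.2896 MHz), 48 kHz x256 -> 12,288,000 Hz
```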

It is important to note that not all BNC Word Clock outputs are created equal. When distributing Word Clock signals, isolation and proper source impedance at each output BNC are important. If the outputs are simply fed from one low-impedance source, it will be impossible to correctly terminate each line, and a single bad cable or poor connection will reflect back and compromise the performance of every other line. This is another condition that creates jitter.

Join us next issue, where we continue our look at synchronization in the digital world, tackling the topic of jitter.

Bob Snelgrove is the President of GerrAudio Distribution and the Canadian Product Specialist for Audio Precision test instrumentation.

The Musicality of Mastering by Marisa T. Déry

Wednesday, December 18th, 2002

In this article I will be writing about “The Musicality of Mastering”. Although I will touch on some technical issues, I’d like to focus on the creative process of mastering. The mastering engineer’s role seems to be changing a bit. Whereas before a person would walk into the room and I would EQ the mix as best I could (adjusting levels, etc.), now I’m putting more and more special effects into the mix – record noise, backwards snare, flange on a section of a song (à la Britney) – and people are asking for my input.

First, I would like to touch on a much talked about subject amongst Mastering Engineers: L-O-U-D-N-E-S-S

Play a CD that is five years old, then play a new release, and you will hear that the difference is staggering.

Ex. Marvin Gaye’s “I Want You” (Marvin Gaye’s Greatest Hits, Motown) then Linkin Park’s “One Step Closer” (Hybrid Theory, Warner Bros.) then Marvin Gaye again.

What is happening now is that music is getting louder and louder at the expense of dynamic range. In the early ’90s, the reference level was -12dB on most DAT players, which is why many old players had a line at -12. Then came the Finalizer, and people began setting their levels to 0. The problem was that every DAT player manufacturer had a different reference level for 0. Makers of consumer DATs would set the meters hot so that inexperienced users wouldn’t distort their recordings; 0 wasn’t “0” anymore. The Finalizer made things worse because you could set the mix with an output ceiling of -0.3dB (the recommended maximum for CDs), yet still make your program louder and louder while remaining at -0.3dB.
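
For readers who want to connect these reference levels to actual signal amplitudes, here is a minimal sketch of the underlying dBFS arithmetic (dB = 20 x log10 of the fraction of digital full scale); the printed values are illustrative, not taken from any particular machine.

```python
# A minimal sketch of the dBFS arithmetic behind these reference levels.
import math

def dbfs_to_linear(db):
    """Fraction of digital full scale corresponding to a dBFS value."""
    return 10 ** (db / 20)

def linear_to_dbfs(fraction):
    """dBFS value corresponding to a fraction of digital full scale."""
    return 20 * math.log10(fraction)

print(f"-12  dBFS = {dbfs_to_linear(-12.0):.3f} of full scale")   # ~0.251: plenty of headroom
print(f"-0.3 dBFS = {dbfs_to_linear(-0.3):.3f} of full scale")    # ~0.966: just below clipping
print(f"   0 dBFS = {dbfs_to_linear(0.0):.3f} of full scale")     # 1.000: no headroom at all
```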

The question is, “When is loud TOO LOUD?” All that I can say is that you need to leave room for the music to breathe. People are handing me mixes at 0dB because the engineer cranked up the Finalizer or the limiter conveniently located in the studio. Engineers are concerned that their clients won’t be impressed with their skills, so they hand over a “finalized” mix where there is absolutely no room for me to do anything. 0dB is also dangerous because many CD-burner towers assume that a program peaking at 0.0dB must be overloading, and promptly reject all the CDs being duplicated (it’s quite impressive to see all those CDs popping out with flashing lights by their side).

A Good Mix
A mastering engineer relies on getting a good recording and mix to do his or her job properly. Too often musicians run out of money after the mix and are never really satisfied with it. Your job is to make them forget that they didn’t like their mix.

Analog vs. Digital EQ
There is value in the noise that an analog chain adds on the way into the DAW. Generally, I will extract a mix from a CD into the computer with the Adaptec Toast Extractor. It is then imported into Pro Tools, where I have an endless supply of AudioSuite plug-ins (bells and whistles). This faster technique is used because the client wants everything done as quickly as possible; “extracting”, unfortunately (fortunately?), is faster than real-time loading. What I prefer is a DAT or CD master that I can patch through an analog EQ before going into Pro Tools. Those mastered mixes, to me, sound human.

There is a breath in those mixes that I cannot replicate with digital processing; there is a noise, a life to those mixes. One must never forget that what you are mastering is music. An artist puts time, energy, emotion and passion into those songs. Out of respect to the artist and the music, you have to make that mix breathe and come alive. You can’t process it to such an extreme that there is no dynamic range, no peaks and valleys, no life. It’s just a block of noise, a block that you can beautifully see in Pro Tools or any other program (ex. the L1 set at 12dB threshold).

Audio Restoration
Another thing that I would like to touch on briefly is Audio Restoration. Whether you are dealing with old reels or 78 RPM records, try to make them sound as natural as possible. There are many outboard EQs and software plug-ins for that purpose. The Waves restoration package is one that I use a lot. Yet even there, you must listen with a musician’s ears.

Resist the temptation to get rid of all the hiss, especially with orchestral music! It’s not only about the sonic quality; it’s also about the music. Be creative when you are working on these programs. I have a little Casio keyboard at work, and when I can’t figure out what frequency is humming at full volume (I’m stuck and/or tired), I’ll grab the Casio and find the note on the keyboard. I have a chart that associates the notes of a piano keyboard with frequencies, so if the note (or hum) is Middle C, I’ll look at the chart and find that I need to notch out 261.63 Hz – it’s a start.
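
The chart lookup can also be done with the standard equal-temperament relation f = 440 x 2^((n - 69) / 12), where n is the MIDI note number and A4 is tuned to 440 Hz. A small sketch (the function name is my own, not from the article):

```python
# A small sketch of the note-to-frequency lookup, using equal temperament
# with A4 (MIDI note 69) tuned to 440 Hz.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_to_freq(name, octave):
    """Frequency in Hz of a named note, e.g. note_to_freq('C', 4) for Middle C."""
    midi_number = 12 * (octave + 1) + NOTE_NAMES.index(name)   # C4 = MIDI note 60
    return 440.0 * 2 ** ((midi_number - 69) / 12)

print(f"Middle C: {note_to_freq('C', 4):.2f} Hz")   # 261.63 Hz, the value in the chart
print(f"A4:       {note_to_freq('A', 4):.2f} Hz")   # 440.00 Hz
```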

Which brings us to the creative side of mastering …

Recently, I did a project where the artist came up to me and said, “I have three songs that are mixed, and one that is unfinished. I have an appointment with the A&R rep at DreamWorks on Friday … help me.” So I listened to the “unfinished” song and began throwing out suggestions:

– Why not throw a Janet Jackson type drum loop at the head?
– During the Tag section at the end “chorus” it.
– When it comes back “flange” it.
– In the beginning, listen to the lyric. Play with it.
– Pan to the left when you say, “left”.
– Pan to the right when you say “right”.
– Make it move.

I am now listed as “Remix Engineer” …

There are a lot of plug-ins that you should play with – too many to cover them all – but the L1 Ultra-maximizer (now the L2 Ultra-maximizer) must be mentioned, as it has become a must in all productions. Before, it was recommended that one set the attenuation between -3dB and -6dB; now anything goes, so all you can really do is match the levels of a new release.

But be aware! It has been said (AES Conference, New York, 2001) that 9 out of 10 songs on Billboard’s Top 10 are distorted, and that songs from the 1970s sound technically better and have more dynamic range than songs released in 2001-2002. We have become a generation of “distorted” listeners (it’s no wonder that teenagers today will be partially deaf by the time they reach 30). Hopefully 5.1 technology might help ease the loudness wars.

In Summary
I’m writing this article because of my concern with where we are going with the loudness wars. I am an engineer by trade and a musician by birth. I have pursued a career in engineering because of my unquestionable passion for music. I respect Creators and Performers who bare their souls to tape. They rely on the Basics Engineer, the Overdubs Engineer, the Assistant Engineers, the Mixing Engineer and the Mastering Engineer to preserve the integrity of their music. It is our duty to understand their music, their art.

It’s not just about putting a mic in front of an instrument and pushing the record button, or adding highs and lows in the mastering process. It’s about understanding what you are recording, mixing or mastering. Using your instincts to make it sound right. As a technician, your job is to make the music sound as sonically perfect as possible; as a human being, your job is to make the music sound as human as possible (with or without noise).

Marisa T. Déry, a native of Ottawa, Canada, is Chief Mastering Engineer at the Tape Complex in Boston, MA. Her clients include the Mighty Mighty Bosstones, Tugboat Annie, Scientific, Chapter In Verse and RUSHYA.

Good Amps and Power Efficiency

Wednesday, December 18th, 2002

PA amplifiers need to combine the delicacy of a good hi-fi amp with the robustness and reliability of a farm tractor, blending (increasingly) with the low weight and compactness of aeronautical gear.

Good-sounding power amps (ones which add minimal colouration or distortion to the signal, purely making it louder) require great sophistication to enlarge and deliver the signal very precisely over a wide ‘canvas’ of levels and frequencies, while also delivering high currents and voltages.

And these quantities are not delivered into docile power-absorbing elements, but instead into speakers, which are quite complex and ‘reactive’ in the way they interact with the amplifier.

No power amplifiers are 100 per cent efficient – even the best manage only about 80 per cent in reality. The best speakers, meanwhile, only approach 25 per cent efficiency. Best overall efficiency is consequently about (0.8 x 0.25) = 20 per cent.

The average overall efficiency figure is more often between five and ten per cent. Taking ten per cent as an approximate figure, this means that to get a certain amount of acoustic power – in other words, music at a suitable sound level – in the room, we have to provide about ten times that power from the electricity supply. And this is the amount that an audio power amplifier has to handle and ‘process’.
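
Put as arithmetic, the short sketch below uses the article’s approximate figures to show how the best-case and typical numbers translate into the electrical power an amplifier has to handle; the function names are my own.

```python
# A back-of-envelope sketch using the article's approximate efficiency figures.
def overall_efficiency(amp_eff=0.8, speaker_eff=0.25):
    """Best-case fraction of electrical power that ends up as acoustic power."""
    return amp_eff * speaker_eff

def electrical_power_needed(acoustic_watts, overall_eff=0.10):
    """Rough electrical power needed for a given acoustic output."""
    return acoustic_watts / overall_eff

print(f"Best case: {overall_efficiency():.0%}")                    # 0.8 * 0.25 = 20%
print(f"10 W acoustic at ~10% overall efficiency needs "
      f"{electrical_power_needed(10):.0f} W electrical")           # roughly 100 W
```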

We’ll also want to have some power capability in reserve – since inadequate power results in amplifier overload and bad sound. In general, erring on the side of over-rating is better than under-rating.

And remember that the relationship between watts and loudness isn’t proportional in the way you might imagine. As a rule of thumb, you need to increase the power delivered to any particular speakers by at least tenfold (x10) to attain about twice (x2) the audible level. This appears on a sound level meter as a 10dB higher SPL (sound pressure level) – so, for example, if 100 W gives 90dB SPL, 1,000 W will be required to raise the level (where nothing else is altered) to 100dB SPL.
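
This rule of thumb follows from the standard decibel relation, level change in dB = 10 x log10(P2 / P1). A minimal sketch reproducing the article’s 100 W / 1,000 W example:

```python
# A minimal sketch of the power-to-SPL rule of thumb quoted above.
import math

def spl_change_db(p_old_watts, p_new_watts):
    """Change in SPL (dB) from a change in amplifier power, all else equal."""
    return 10 * math.log10(p_new_watts / p_old_watts)

base_spl = 90                                    # the article's example: 100 W -> 90 dB SPL
print(base_spl + spl_change_db(100, 1_000))      # 100.0 -> ten times the power gives +10 dB
print(base_spl + spl_change_db(100, 200))        # ~93.0 -> doubling the power gives only ~3 dB
```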

In short, much, much more power is needed than you might expect.

This article is reprinted with permission from The Live Sound Manual, published by Backbeat Books, www.backbeatbooks.com. All information is copyrighted and cannot be reprinted without the permission of the publisher.

Recording The Lead Vocal

Wednesday, December 18th, 2002

How Many Tracks Is Too Many?

More often than not, the lead vocal is the track that contains the most emotional content of the song. With repeated attempts at recording the vocal, you run the risk of losing that emotion and “magic”. So while it’s ideal for the singer to nail the perfect take in one or two tries, a good engineer knows how to respond the other 90 per cent of the time.

The answer is to compile the best elements of a few different takes into a single, composite performance where each line, each phrase and even each syllable is sung just the way you want. This process is called “comping”. It’s done on nearly every record you hear, even the ones you’re convinced are single, complete takes.

Tip: If the singer is hesitant to record this way, claiming “artistic integrity”, remind them that they’re free to sing the song through from top to bottom, without interruption. Meanwhile, just switch tracks while you’re winding back to the top after each take. (Make sure you’re only sending the current take to the headphone mix – it can be very disconcerting for a singer to begin a song and hear two voices coming out of his mouth.)

In this digital age of virtually unlimited available tracks, it’s tempting to record 5 or even 10 different takes before comping the vocal. But using that many can really overwhelm you and confuse the process. Try utilizing two or three tracks instead. Starting with your first take, tell the singer it’s only a practice take for the purpose of further level adjustment (when in fact you’ve already adjusted everything and are ready to go). This is useful for anxious singers, taking the “pressure” off them.

After two or three takes, stop if you have terrific performances overall. If not, go back to the track with the least inspired take and record over it. Hopefully, you have gained the singer’s trust by now and don’t need to inform them of these details. Continue with this process until you feel that, within those two or three tracks, you have the makings of a great performance.

When you’re ready to start comping, draw lines on the lyric sheet so you can make little notes (check marks, yes, no, good, bad, maybe) on each line of each take. Involve the singer in this process only if they insist – the more they analyze their own performance, the less they’re likely to respond with an inspired, heartfelt one. Once you have usable takes for each line, bounce the winners onto a fresh track (you can also bounce certain lines from “alternate” takes into one take that just needs a few fixes).

Tip: After you have a comped vocal, get away from it for a while (dinner break, TV break, whatever). Then listen to it with fresh ears, and with the singer, to see if you still need to fix something.

This article has been reprinted from the Studio Buddy software. Written by acclaimed producers/engineers Michael Laskow and Alex Reed, Studio Buddy gives hints and tricks on various recording techniques. To download a free copy, go to www.studiobuddy.com.

Contact

4056 Dorchester Rd., #202, Niagara Falls, ON, Canada L2E 6M9
Phone: 905-374-8878
FAX: 888-665-1307
Email: mail@nor.com