

Summer Survivor: A Guide to Successful Festival Gigs by Fred Michael

Friday, April 18th, 2003

Some of you will be heading out this summer on the outdoor festival circuit, having gotten your sound mixing experience mostly indoors, on the bar circuit. If this is new territory for you, here’s a quick survival guide.

Advancing Your Shows
Phone all the sound companies involved well in advance; it’s best to talk with someone actually working on your stage, although this is not always possible. This first, real-time contact is important in establishing a personal connection; use e-mail for subsequent communications. All you want to do with this call is let them know who you are, find out who you should send your technical requirements to and get a quick rundown on the rig you’ll be using. If you have any special requirements, mention them at this point, but remember to repeat the request in your correspondence so the supplier clearly understands how important it is. Of course, without a signed performance contract with your technical rider attached, there are no guarantees and, even with a contract, be prepared to work with whatever is there when you arrive; a calm attitude and an open mind will pay big dividends.

E-mail or fax a stage plot and input list to all of the sound companies after you’ve made your calls; e-mail is best, because you can update your information as needed and the recipient can make clean paper copies. If you’ve never built an input list or plot before, consult your more-experienced colleagues to get some ideas; be sure to include monitor channel assignments and the number of mixes; the type and location of monitors will be shown on your plot. Note: unless you have a monitor engineer or stage tech traveling with you, it’s best to avoid the use of in-ear monitors on the festival stage; it could be a very negative experience for your musicians.
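
If you have never seen one, an input list is really just a table: one row per channel, with the source, the mic or DI, and where it needs to appear in the monitors. The sketch below shows one way to keep that table as plain data so it can be reprinted or e-mailed whenever the lineup changes; every channel name, mic choice and monitor assignment in it is a hypothetical example, not a recommendation from this article.

```python
# One possible way to keep an input list as plain data; every entry below is
# a made-up example -- substitute your own band's channels.
input_list = [
    # (channel, source, mic/DI, monitor mix)
    (1, "Kick",       "D112 (house)",   "Drums wedge"),
    (2, "Snare top",  "SM57 (house)",   "Drums wedge"),
    (3, "Bass DI",    "DI (house)",     "Bass wedge"),
    (4, "Guitar amp", "SM57 (house)",   "Guitar wedge"),
    (5, "Lead vocal", "Beta 58 (ours)", "All wedges"),
]

def print_input_list(rows):
    """Print the list as a fixed-width table suitable for e-mailing or faxing."""
    header = f"{'Ch':<4}{'Source':<14}{'Mic/DI':<18}{'Monitor mix':<14}"
    print(header)
    print("-" * len(header))
    for channel, source, mic, monitor in rows:
        print(f"{channel:<4}{source:<14}{mic:<18}{monitor:<14}")

print_input_list(input_list)
```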

What To Take With You
Here’s a comprehensive list; see what’s relevant for you:

– Specialty microphones, effects or other electronics that are vital to your show.
– Basic tool kit, including multi-tool, flashlight, headphones, audio adapters, ear plugs.
– Phase checker, multi-meter, SPL meter, soldering iron, spare connectors.
– CDs of your favorite music tracks for system tuning (hey, you might get a chance!).
– Recording equipment.
– Laptop computer for e-mail and 1001 other things.

Again, pick what is relevant; if you get a couple of club dates in between outdoor shows, these “tools of the trade” will prove their worth.

At The Festival
Ideally, you will arrive at your stage a couple of hours before your set, any earlier and nobody wants to talk about your gig anyway! Visit the monitor mixer first, make your introductions, drop off copies of your plot/input list, and find out when they’ll be ready to discuss your setup. Then, go to the FOH and repeat the routine. This is your chance to hang out for a while without pressure and get a feel for the rig. Absorb as much as you can: Type of console, master fader settings, main EQ, order of inputs; check the effects, gates, and comps, and decide which of these you will be using. If you notice a console function, effect, or signal processor that you are not familiar with, you may want to avoid a steep learning curve at this point and just have the system engineer dial up what you need when the time comes; don’t worry about looking stupid, getting results is the important thing; you’ll also learn something for next time.

At this point, it’s a good idea simply to listen to the sound system for a few minutes. Is the rig comfortably within its operating range or is it verging on distortion? Are frequencies jumping out that might give you trouble on your set? Use this information to establish how you will proceed when it’s your turn at the console.

Now it’s time to focus on the stage. At the agreed time, review your entire setup with the person in charge (usually the stage manager or monitor engineer); give as much relevant detail as possible. Stay at the stage as long as you can to ensure your instructions are being carried out and that your team members and musicians are comfortable.

Back at FOH, once the console is marked with your inputs, get out the cans and start listening to channels and setting the trims based on past experience, because you won’t be getting a line check (unless, of course, you’re the headliner!). Ask for the “FOH-to-stage” mic so you can immediately point out a mis-patch or missing input. While you are waiting for the inputs to be plugged in, assign your effects, gates, and comps. If you are making a board recording, have the FOH tech look after this so you can focus on mixing. Decide now if you trust the FOH tech enough to share the mixing workload; you can get a good mix up a lot quicker, for example, if all the drum channels are being looked after by someone else for a few minutes while you dial in featured vocals and instruments.

At last, your band is on stage. Go easy on yourself and back off supporting channels or subgroups by 3dB from their usual position until you get a feel for the level and tone that you want. Your job during the first song is to verify your trims are where you want them, the featured inputs are on top of the mix, and your effects are in the acoustic picture. Next, ensure any active gates and compressors are behaving as required. By this time, the song is probably over; in any case, now you can move on to fine-tuning your equalization on a channel-by-channel basis. If you find yourself repeatedly dealing with the same frequencies, consider doing a little overall system tuning; or you can ask the system tech what he thinks and suggest possible problem frequencies you’d like addressed. There is no established etiquette here; some techs don’t allow anybody to touch the house EQ; others don’t care what you do. It’s best to ask; if there’s a general reluctance, just move on and get what you can out of the console.

A final comment on mixing in these situations: If you’ve done most of your work in clubs, around 50′ from the PA, avoid trying to recreate that face-peeling sound outdoors, at 150′; you’ll risk driving the system into distortion, or, at the least, very heavy limiting. Working on these large, outdoor sound systems is a totally different game, where small changes in fader and EQ settings can make a big difference. Try for a big, comfortable sound with enough dynamic headroom remaining for a lead vocal or instrument to emerge from the mix when it’s needed. If you can get close to this, you know you’re in the sweet spot, and more volume only means less quality.

When it’s all over – no matter how it’s been for you on this particular day – don’t forget to thank the festival sound crew for their efforts; it’s a tough gig at the best of times. Swap contact info with the folks that particularly impressed you and, then, you are on to the next adventure.

Have a great summer!

Fred Michael is President of Rocky Mountain Sound Production Services in Vancouver, BC; June 2003 marks the company’s 18th consecutive season as supplier to the Vancouver International Jazz Festival. Fred can be reached at fred@rmsound.com, or via the Rocky Mountain Sound Web page, www.rmsound.com.

Practical Production Solutions

Tuesday, February 18th, 2003

Here are six helpful tips to get you out of some of the most common situations.

— Say the direct feed from the guitar amp is horribly buzzy and noisy – it sounds like an earthing problem, but there’s no time to trace and fix it. And the precious VIP guitarist won’t allow any backline cabs to be miked up … So what do you do? One answer is to use a spare backline amp, fed with the guitar signal, placed under or beside the stage, and mike this one. A noise gate, properly set up, can also quieten the guitar buzzes between notes.

— A safety official (with crinkly yellow jacket and clipboard) has condemned the tall stack of out-front PA cabs as unsafe. Solution: ask the venue management where the rigging points are in the ceiling. Rigging straps or suitably rated ropes are then used to secure the stack to the rigging points.

— To avoid the clutter and visual obstruction caused by bulky floor monitors, one (high-budget) solution – as used by Pink Floyd, among others – is to use under-stage monitoring, with the monitor cabs pointing up from beneath open grids fitted flush into the stage floor.

— When the PA is flown, it’s possible that the front rows of the audience might miss out on some of the signal – the sound can travel over their heads and they only hear the monitors and backline. This can be overcome by using ‘groundfills’ – full-range PA cabs placed under or beside the stage.

— Miking an orchestra that’s seated underneath a flown PA can be a problem – strings are fairly quiet, so the mike level needs to be high, increasing the risk of picking up spill from the PA, and even feedback. If it’s not feasible either to move the players or re-position the PA away from them, one solution is to alternate the polarity of neighbouring players’ mikes, to reduce (partly cancel out) the ambient soundfield. Alternatively you could use lapel (tie-clip) omni mikes taped to the rear of the string instruments’ bridges, which helps reduce spill on individual mikes.

— When using a revolving stage (not common, but used in some big-name productions) it is normal to reverse the stage’s direction after every two acts, to avoid twisting the multicore cable/snake. Multicore lines have been lost in this way before – effectively by strangulation.

This article is reprinted with permission from The Live Sound Manual, published by Backbeat Books, www.backbeatbooks.com. All information is copyrighted and cannot be reprinted without the permission of the publisher.

Staying In Synch Part II: Jitter Is A Four-letter Word by Bob Snelgrove

Tuesday, February 18th, 2003

Jitter is a word that we hear often and that manufacturers love to quote. When we hear “Low Jitter” we all know this is a good thing – and it is. The causes and cures for jitter are particularly complex, and many discussions of jitter are very misleading. A quick overview of jitter is important because it is the most common source of timing errors, poor synchronization and, ultimately, bad sound in the digital studio.

I am going to borrow Julian Dunn’s definition of jitter: “The variation in the time of an event – such as a regular clock signal – from nominal.” On paper, the waveform representing the Word Clock signal is a textbook-perfect square wave. In the real world, even a “perfect” square wave has a finite rise and fall time with some undershoot and overshoot. At the end of a cable it will be somewhat less than square, and in many situations where long cable lengths or bad termination are involved it will not be square at all, becoming a problem causer instead of a problem solver.

Jitter is – for whatever reason and however caused – when the transitions in the square wave do not line up in time with the expected, or nominal, periodicity. Jitter is created when distorted edges are detected at slightly the wrong moments, and those moments are then taken as the timing reference. This is a dynamic condition: the timing error is random and constantly changing, sometimes ahead of and sometimes behind the nominal position. The word “jitter” is appropriate because this departure from nominal is never a stable offset; it always varies. In effect, the timing information jitters back and forth in time.
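
To make the idea of variation from nominal concrete, here is a minimal numerical sketch (an illustration only, not a measurement method; the 48 kHz rate and 5 ns RMS error are assumed figures): it generates ideal Word Clock edge times, adds a small random timing error to each edge, and reports how far the edges stray from their nominal positions.

```python
import random

FS = 48_000                    # assumed Word Clock frequency, in Hz
NOMINAL_PERIOD = 1.0 / FS      # ideal time between rising edges (about 20.8 microseconds)
JITTER_RMS = 5e-9              # assumed 5 ns RMS of random timing error

# Ideal edge times versus "real" edges with a small random error on each one.
ideal_edges = [n * NOMINAL_PERIOD for n in range(1000)]
real_edges = [t + random.gauss(0.0, JITTER_RMS) for t in ideal_edges]

# Jitter is the deviation of each edge from its nominal position.
deviations = [real - ideal for real, ideal in zip(real_edges, ideal_edges)]
peak_ns = max(abs(d) for d in deviations) * 1e9

print(f"Nominal period: {NOMINAL_PERIOD * 1e6:.3f} us")
print(f"Peak deviation from nominal: {peak_ns:.1f} ns")
```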

Many different conditions can cause jitter. One of these is any interference signal that modulates the Word Clock lines, for example a ground loop. The type and severity of the jitter will be determined by the nature of the ground modulation, as these ground signals can range from 60 Hz hum all the way up to dimmer noise.

To properly measure or observe jitter it is necessary to first have a stable frequency reference, and then a method of dynamically comparing this reference to the actual signal. Many people attempt to measure jitter by simply looking at a Word Clock or AES3 waveform on an oscilloscope. I have seen one magazine article that came to erroneous conclusions about several high-end Word Clock products because it used completely faulty methods for its jitter measurements and aural comparisons.

The reason we are concerned about jitter is that each internal clock in our studio looks for and locks onto the edge of the incoming Word Clock square wave, and uses the distance between transitions to determine the operating frequency of the Word Clock, which it then uses as its own timing reference.

If the edge of each Word Clock square wave is occurring at different times, or the Word Clock receiver is triggering on it at different times, then each clock will have a slightly different time base – exactly the problem we are trying to eliminate.

Jitter is also a concern because it can be caused in so many ways. It is common to find Word Clock generators that produce very low jitter from their internal reference but pass on high amounts of jitter when locked to an external reference like Video Sync or Time Code.

Lock between an external reference like SMPTE time code or Video Sync and Word Clock is only excellent if it is achieved without transferring any jitter.

If you have a master Word Clock generator in your studio and everything works beautifully until you lock that generator to incoming video, then high jitter passed through from the video to your Word Clock outputs is your problem.

Assuming that the master Word Clock generator has no significant jitter itself, the problem then becomes one of minimizing the jitter that can easily be induced or transferred into the timing signal as it moves from device to device and, in a larger facility, from room to room. This involves paying careful attention to cables, connectors and termination.

Next to poorly designed digital recording equipment and poorly designed master Word Clocks, cables, equipment interconnection and improper termination are the most common causes of timing errors, because they are a direct cause of jitter. All Word Clock signals must be connected using RG59 coax cable and properly terminated. RG59 cable has a characteristic impedance of 75 ohms, and the BNC connectors that go on it are specifically designed for use at this impedance. RG59 provides a signal path that minimizes cable-induced jitter when fed from a 75-ohm source, and it must be terminated with a 75-ohm termination to function properly.

Cable runs must be kept as short as possible, and separate, isolated Word Clock outputs should be used to feed each Word Clock input if at all possible. The most jitter-critical pieces of equipment in your studio are your A/D and D/A converters. To ensure the lowest possible jitter, these must be physically located as close to the master Word Clock outputs as possible in order to benefit from short cable runs. To do this, mount your Word Clock generator near your converters in the same equipment rack. The metal grounding of the rack will also help minimize ground-induced jitter.

While it is common to use BNC-T connectors to distribute a single Word Clock output to several devices, this must only be done after you are certain that the 75-ohm termination for each Word Clock input you are looping to can be shut off. Restrict the use of BNC-Ts and loop no more than three Word Clock inputs. Restrict your looped cable lengths between Ts to very short distances and remember that the last BNC-T in the chain must be terminated.

A Word Clock output cannot be terminated by more than one input. Double termination is a common problem when looping Word Clock using BNC-T connectors. One termination per Word Clock output is required for optimum performance. No termination or more than one termination per output will cause excessive jitter and seriously degrade performance. Beware of 50-ohm coax cables and connectors – they are not the same thing.
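
As a rough, simplified illustration of why the termination value matters (an idealized transmission-line sketch, not a substitute for the practices described above), the fraction of an incident edge reflected back down a 75-ohm line depends on the load it sees: a correct single 75-ohm termination reflects nothing, a 50-ohm load or a doubled-up termination reflects a substantial portion, and an unterminated line reflects nearly all of it.

```python
# Idealized sketch: fraction of an incident Word Clock edge reflected back down
# a 75-ohm line for different load conditions (illustration only).

Z0 = 75.0  # characteristic impedance of RG59 coax, in ohms

def reflection_coefficient(z_load, z0=Z0):
    """Voltage reflection coefficient seen at the far end of the line."""
    return (z_load - z0) / (z_load + z0)

loads = {
    "correct single 75-ohm termination": 75.0,
    "50-ohm termination (wrong coax family)": 50.0,
    "double termination (two 75-ohm loads)": 37.5,
    "no termination (essentially open circuit)": 1e9,
}

for name, z in loads.items():
    gamma = reflection_coefficient(z)
    print(f"{name:>45}: {abs(gamma) * 100:5.1f}% of the edge reflected")
```
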
It is well accepted that jitter introduced into the AES3 bit stream manifests itself as clearly audible sonic degradation. Low-jitter generation of reference Word Clock timing signals, their jitter-free synchronization to video and SMPTE, and their proper distribution between digital audio devices throughout the studio will result in the precise timing of digital audio bit streams. It will also deliver a noticeable improvement in the quality and subtlety of recorded and mixed music, as well as in the reliability of transfers, spotted sound effects and layback to video.

Bob Snelgrove is the President of GerrAudio Distribution and the Canadian Product Specialist for Audio Precision test instrumentation.

Safety Tips While On Tour

Wednesday, December 18th, 2002

It’s always a good idea for at least one member of the crew to be trained in first aid techniques. It may as well be you… In the meantime, there are self-help and precautionary steps that everyone involved in PA work can take. First of all, here are some quick tips on preventing back strain when lifting and moving heavy gear.

— First of all, if the object looks like it’s too heavy for one person to lift, just get some help. Forget macho – how macho is it to be laid out in a hospital bed in traction?

— If you are going to tackle it yourself, think of the following word: BACKUP. It stands for the following:
Back straight – don’t curve your spine
Avoid stretching – keep the object close
Clutch firmly – get a good secure grip
Knees bent – helps with balance
Use your legs – let them take the strain
Putting down – do it the same way

If you’ve wrenched something, bashed something, cut something or you’re just generally feeling poorly, think on this: Hospitals are no friends of minor complaints, and in some countries treatment is uncertain and expensive. Or you might be stuck on a festival site, feeling ill, but too badly needed to leave. Or say you witness a fellow crewmember lying injured, and there’s no one else to help them…

Assistance could be at hand, in the form of a book like The Family Guide to Homeopathy by Dr. Andrew Lockie, which has some sound advice on first aid and ‘bodily disorder’ treatment, using homeopathic remedies where appropriate. The remedies listed can be safely self-prescribed and are low-cost. A basic first aid kit of about 20 types of ‘remedy’ pills, one tincture and five creams covers most situations – from burns, crush injuries, weird food poisoning, sprains, smog fumes and all manner of other minor troubles that stop you from giving 100 per cent.

Of course, if the injuries are plainly serious, or first aid doesn’t ease matters fairly quickly, or symptoms worsen, immediate hospitalization is advisable.

This article is reprinted with permission from The Live Sound Manual, published by Backbeat Books, www.backbeatbooks.com. All information is copyrighted and cannot be reprinted without the permission of the publisher.

Staying In Synch – Part I: Word Clock Explained by Bob Snelgrove

Wednesday, December 18th, 2002

Everyone using more than one piece of digital audio equipment should be concerned about the quality of their studio’s Word Clock. This article will explain the critical role that Word Clock quality and distribution plays in the digital audio environment and the audible effects that poor clocking has on digital music systems. Examples of common mistakes will be given, along with suggestions for proper hook-up.

I need to start off by saying that when it comes down to quality of sound, I tend to approach the wonders of digital audio with a healthy degree of caution. I love my CD player and I personally own and enjoy lots of other digital audio toys. Nevertheless there are still many problems with the digital representation of audio that have yet to be solved. All of these problems relate to digital audio’s sonic accuracy and transparency.

At best, well-executed Word Clock generation, synchronization and distribution will make your studio sound audibly better and allow you to become a better artist, engineer or producer. At worst, it will simply eliminate countless technical gremlins that would otherwise make you inefficient, ruin your mixes and drive you crazy.

Timing
Despite the many similarities, digital audio differs from analog audio in a couple of unique and major ways. The first is that an analog audio signal is a continuously varying signal, which is represented digitally by a limited number of discrete numerical values. The second is that these numerical values represent the analog signal only at specific points in time, or sampling instants, rather than continuously at every moment in time. Sampling instants are determined by various devices and processes, the most critical being Analog to Digital and Digital to Analog conversion. Converters are responsible for transforming an analog signal into a digital representation and back again; this is where it all starts and this is where it all ends.

A sample clock determines when these sample instants occur. All digital audio devices have some form of sample clock to control their internal sample rate, or sampling frequency. In a studio where we integrate many different pieces of equipment that all depend on their own clocks to function, we will invariably have sample instants taking place at different times unless we synchronize all the clocks in each piece of equipment and tie the timing of these events together.

This synchronous timing is required because, unlike analog audio, digital audio has a discrete time structure consisting of individual samples. Communication between different digital audio devices, or the mixing of different digital audio signals together, will fail if each device is not producing its bits of data in precise co-ordination with the others.

Poor-quality timing between multiple digital audio devices, or improper distribution of those timing signals, will result in non-synchronous operation and the creation of random, highly audible artefacts, often described as clicks, pops or glitches.
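
As a simple numeric illustration of why free-running clocks cannot stay in step (the 20 ppm frequency error below is an assumed, illustrative crystal tolerance, not a figure from this article), two devices that both believe they are running at 48 kHz drift apart by roughly a sample every second:

```python
# How quickly two free-running sample clocks drift apart.
# The 20 ppm offset is an assumed, illustrative crystal tolerance.

FS_NOMINAL = 48_000.0          # both devices believe they run at 48 kHz
PPM_OFFSET = 20e-6             # assumed frequency error of the second device
fs_a = FS_NOMINAL
fs_b = FS_NOMINAL * (1.0 + PPM_OFFSET)

# After t seconds, device B has produced this many more samples than device A.
for seconds in (1, 10, 60):
    extra_samples = (fs_b - fs_a) * seconds
    print(f"After {seconds:3d} s: device B is ahead by {extra_samples:6.2f} samples")
```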

There are two standard timing signals used to synchronize the internal sample clocks of digital audio equipment. The first, commonly used in large post-production and broadcast facilities, is the AES3 Digital Audio Reference Signal, or DARS for short. This bi-phase signal’s carrier is exactly the same as a balanced AES3 signal, but it carries no audio data (digital zeros) in the data stream. An XLR sync input can usually be found on high-end audio equipment and workstations. DARS is distributed the same way as AES3, via balanced 110-ohm digital audio cable and AES3 XLR-type connectors.

There are two things that make the DARS or Audio Black signal particularly attractive. The first is its high frequency of operation, which is between 2 and 3 Mbits per second. The second is the fact that the professional AES3 interface is balanced and ground isolated making it relatively immune to induced noise, which can be a major source of jitter.

The most common clock distribution method however is Word Clock. The Word Clock waveform is a simple unbalanced square wave. Word Clock is designed to be distributed on 75-ohm, unbalanced coax cable terminated with BNC connectors. In order for synchronous operation to take place all digital audio devices must be fed one of these timing signals from a master reference Word Clock time base. These are typically referred to as Word Clock generators and sometimes as synchronizers depending on the functionality they provide. Great care must be taken to carefully distribute these Word Clock signals to each piece of digital equipment in the studio or the timing signals will be degraded and audio quality will suffer.

Word Clock Generation
The most basic requirement for a Word Clock generator is that it must be able to produce high-quality, stable square waveforms. The square waveform will need to be at one of two frequencies, either 44.1 or 48 kHz, also referred to as the base frequency, or Fs. The generator must also be able to produce industry-standard multiples of either of these two base frequencies, yielding Fsx1 through Fsx4. This x1 through x4 multiplication produces the common sample frequency sets we are all familiar with in digital audio, giving Word Clock frequencies in the range of 44.1 to 192 kHz.

A special case of declining interest and usefulness is Digidesign’s proprietary SuperClock at Fsx256, a frequency between 11.2896 and 12.288 MHz. Because different manufacturers design their products to accept different multiples of the Word Clock base frequencies, all multiples from one to four must be supported, and the master Word Clock generator must be able to produce different Word Clock frequencies from different outputs simultaneously.
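
The frequency sets described above are just multiplication; the short sketch below repeats that arithmetic, printing the x1 through x4 multiples of each base rate (the familiar audio sample rates are the x1, x2 and x4 entries) along with the legacy SuperClock case:

```python
# Word Clock base rates (Fs) and their multiples, as described above.
# The familiar audio sample rates are the x1, x2 and x4 entries; SuperClock
# is the legacy Fs x 256 case.

BASE_RATES_HZ = (44_100, 48_000)

for fs in BASE_RATES_HZ:
    for n in (1, 2, 3, 4):
        print(f"Fs = {fs / 1000:5.1f} kHz  x{n}:   {fs * n / 1000:6.1f} kHz")
    print(f"Fs = {fs / 1000:5.1f} kHz  x256: {fs * 256 / 1e6:.4f} MHz (SuperClock)")
```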

It is important to note that not all BNC word clock outputs are created equal. When distributing Word Clock signals, isolation and proper source impedance from each output BNC is important. If the outputs are simply fed from one low impedance source then it will be impossible to correctly terminate a line and a single bad cable or poor connection will reflect back and compromise the performance of every other line. This is one condition that would also create jitter.

Join us next issue, where we continue our look at synchronization in the digital world, tackling the topic of jitter.

Bob Snelgrove is the President of GerrAudio Distribution and the Canadian Product Specialist for Audio Precision test instrumentation.

The Musicality of Mastering by Marisa T. Déry

Wednesday, December 18th, 2002

In this article I will be writing about “The Musicality of Mastering”. Although I will touch on some technical issues, I’d like to focus on the creative process of mastering. The mastering engineer’s role seems to be changing a bit. Whereas before, a client would walk into the room with a mix and I would EQ it as best I could (adjusting levels, etc.), now I’m actually putting more and more special effects into the mix – record noise, backwards snare, flange on a section of a song (à la Britney) – and people are asking for my input.

First, I would like to touch on a much talked about subject amongst Mastering Engineers: L-O-U-D-N-E-S-S

Play a CD that is five years old, then play a new release, and you will hear that the difference is staggering.

Ex. Marvin Gaye’s “I Want You” (Marvin Gaye’s Greatest Hits, Motown) then Linkin Park’s “One Step Closer” (Hybrid Theory, Warner Bros.) then Marvin Gaye again.

What is happening now is that music is getting louder and louder at the expense of dynamic range. In the early ’90s, the reference level was -12dB on most DAT players, which is why many old players had a line at -12. Then came the finalizer and people began setting their levels to 0. The problem was that every DAT player manufacturer had a different reference level for 0. Makers of consumer DATs would set the meters hot so that inexperienced users wouldn’t distort their recordings. 0 wasn’t “0” anymore. The finalizer made things worse because you could set the mix with an OUT ceiling of -0.3dB (which is the recommended maximum for CDs), yet still make your program louder and louder (while still remaining at -0.3dB).

The question is, “When is loud TOO LOUD?” All that I can say is that you need to leave room for the music to breathe. People are handing me mixes at 0dB, because the engineer cranked up the finalizer or the limiter conveniently located in the studio. Engineers are concerned that their clients won’t be impressed with their skills, so they give them a “finalized” mix where there is absolutely no room for me to do anything. 0dB is also dangerous because many CD-burner towers assume that if the program is peaking at 0.0dB it must be overloading, and promptly reject all the CDs being duplicated (it’s quite impressive to see all those CDs popping out with flashing lights by their side).
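
To see how a program can keep getting louder while its peaks stay pinned just under full scale, here is a small sketch (the test signal, gain figures and crude clipping stage are purely illustrative, not how any particular finalizer or limiter works): as make-up gain goes up, the peak level stays at the -0.3dB ceiling while the average (RMS) level, which tracks perceived loudness, keeps climbing.

```python
import math

CEILING = 10 ** (-0.3 / 20)    # the -0.3dB output ceiling, as a linear value (~0.966)

def dbfs(x):
    """Convert a linear amplitude to dBFS."""
    return 20 * math.log10(x) if x > 0 else float("-inf")

# A toy "mix": a quiet tone with occasional loud transients (purely illustrative).
n = 48_000
mix = [0.25 * math.sin(2 * math.pi * 110 * i / 48_000) for i in range(n)]
for i in range(0, n, 4_800):   # drop in a short spike every tenth of a second
    mix[i] = 0.95

def peak_and_rms(signal):
    peak = max(abs(s) for s in signal)
    rms = math.sqrt(sum(s * s for s in signal) / len(signal))
    return dbfs(peak), dbfs(rms)

def loudness_maximize(signal, gain_db):
    """Crude stand-in for a loudness maximizer: make-up gain, then clip at the ceiling."""
    gain = 10 ** (gain_db / 20)
    return [max(-CEILING, min(CEILING, s * gain)) for s in signal]

for gain_db in (0, 6, 12):
    peak, rms = peak_and_rms(loudness_maximize(mix, gain_db))
    print(f"gain {gain_db:2d} dB -> peak {peak:6.2f} dBFS, RMS {rms:6.2f} dBFS")
```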

A Good Mix
A mastering engineer relies on getting a good recording and mix to do his/her job properly. Too often musicians run out of money after the mix stage and are never really satisfied with their mix. Your job is to make them forget that they didn’t like it.

Analog vs. Digital EQ
There is value in introducing a little noise into the chain going to the DAW. Generally, I will extract a mix from a CD with the Adaptec Toast Extractor into the computer. It is then imported into Pro Tools, where I have an endless number of AudioSuite plug-ins (bells and whistles). This faster technique is used because the client wants everything done as quickly as possible; “extracting”, unfortunately (fortunately?), is faster than real-time loading. What I prefer is a DAT or CD master, which I patch through an analog EQ and then into Pro Tools. Those mastered mixes, to me, sound human.

There is a breath in those mixes that I cannot replicate with digital processing; there is a noise, a life, to those mixes. One must never forget that what you are mastering is music. An artist puts time, energy, emotion and passion into those songs. Out of respect for the artist and the music, you have to make that mix breathe and come alive. You can’t process it to such an extreme that there is no dynamic range, no peaks and valleys, no life. It’s just a block of noise, a block that you can see beautifully rendered in Pro Tools or any other program (e.g. with the L1 set to a 12dB threshold).

Audio Restoration
Another thing that I would briefly like to touch upon is audio restoration. Whether you are dealing with old reels or 78 RPM records, try to make them sound as natural as possible. There are many outboard EQs and software plug-ins for that purpose. The Waves restoration package is one that I use a lot. Yet even there, you must listen with a musician’s ears.

Resist the temptation to get rid of all of the hiss, especially with orchestral music! It’s not only about the sonic quality; it’s also about the music. Be creative when you are working on these programs. I have a little Casio keyboard at work, and when I can’t figure out what frequency is humming at full volume (I’m stuck and/or tired), I’ll grab the Casio and find the note on the keyboard. I have a chart that associates the notes of a piano keyboard with frequencies, so if the note (or hum) is Middle C, I’ll look at the chart and find that I need to notch out 261.63 Hz – it’s a start.
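
If a printed chart is not handy, the note-to-frequency relationship is easy to compute: in equal temperament each semitone is a factor of 2^(1/12) and A above Middle C is 440 Hz. Here is a small sketch that reproduces the Middle C = 261.63 Hz example (the helper function and note list are just an illustration):

```python
# Equal-temperament note-to-frequency lookup with A4 = 440 Hz as the reference.
# Handy when hunting a hum: find the note on a keyboard, then notch its frequency.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_to_frequency(name, octave):
    """Frequency in Hz of a named note, e.g. note_to_frequency('C', 4) for Middle C."""
    semitones_from_a4 = NOTE_NAMES.index(name) - NOTE_NAMES.index("A") + 12 * (octave - 4)
    return 440.0 * 2 ** (semitones_from_a4 / 12)

for note, octave in [("C", 4), ("A", 4), ("E", 2), ("G", 3)]:
    print(f"{note}{octave}: {note_to_frequency(note, octave):7.2f} Hz")
```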

Which brings us to the creative side of mastering …

Recently, I did a project where the artist came up to me and said, “I have three songs that are mixed, and one that is unfinished. I have an appointment with the A&R rep at DreamWorks on Friday … help me.” So I listened to the “unfinished” song and began throwing out suggestions.

– Why not throw a Janet Jackson type drum loop at the head?
– During the Tag section at the end “chorus” it.
– When it comes back “flange” it.
– In the beginning, listen to the lyric. Play with it.
– Pan to the left when you say, “left”.
– Pan to the right when you say “right”.
– Make it move.

I am now listed as “Remix Engineer” …

There are a lot of plug-ins that you should play with – too many to cover them all, but the L1 Ultra-maximizer (now the L2 Ultra-maximizer) must be mentioned, as it is now a must in all productions. Before, it was recommended that one set the attenuation meter/setting between -3dB and -6dB; now anything goes, so all you can really do is match the levels of a new release.

But be aware! It has been said (AES Conference, New York, 2001) that 9 out of 10 songs on Billboard’s Top 10 are distorted, and that songs from the 1970s sound technically better and have more dynamic range than songs released in 2001-2002. We have become a generation of “distorted” listeners (it’s no wonder that teenagers today will be partially deaf by the time they reach 30). Hopefully 5.1 technology might help ease the loudness wars.

In Summary
I’m writing this article because of my concern with where we are going with the loudness wars. I am an engineer by trade, and a musician by birth. I have pursued a career in engineering because of my unquestionable passion for music. I respect Creators and Performers that bare their soul to tape. They rely on the Basics Engineer, the Overdubs Engineer, the Assistant Engineers, the Mixing Engineer and the Mastering Engineer to preserve the integrity of their music to tape. It is our duty to understand their music, their art.

It’s not just about putting a mic in front of an instrument and pushing the record button, or adding highs and lows in the mastering process. It’s about understanding what you are recording, mixing or mastering. Using your instincts to make it sound right. As a technician, your job is to make the music sound as sonically perfect as possible; as a human being, your job is to make the music sound as human as possible (with or without noise).

Marisa T. Déry, a native of Ottawa, Canada, is Chief Mastering Engineer at the Tape Complex in Boston, MA. Her clients include the Mighty Mighty Bosstones, Tugboat Annie, Scientific, Chapter In Verse and RUSHYA.
