
Archive for the ‘Uncategorized’ Category

Less Is Best When Recording Tracks by Mike Fraser

Saturday, June 19th, 2010

As a mixer, a problem I continually encounter is a song’s track count. I sometimes receive projects that have over 240 tracks. With 64 outputs in the Pro Tools rig I use, a lot of tracks have to be combined before I can hear all of the musical sections as intended.

When recording in the early days, only a single microphone was placed in a room. To balance the music, the players were placed around the mic. Loud instruments like drums and brass would be placed further away; softer instruments like acoustic guitars or vocals would be placed closer to the mic. The end result was a live performance properly balanced on one mono track.

Next came the era of multi-track recording. Four, eight, and eventually 16-track recorders came into being. The engineer and producer would labour to strike a balance between capturing that magical performance and getting the right blend to tape. For example, The Beatles’ “I Want To Hold Your Hand” was recorded on four tracks and “Hey Jude” was recorded on eight. Final mixing was easy, as everything had been “pre-mixed” due to the lack of tracks during recording. Soon, 16-track machines gave way to 24-track machines and finally, in the heyday of analog recording, two or more 24-track machines were synced together to create 48 or more tracks. As you can imagine, 24 to 48 tracks made for a much more involved mixing process.

Today, we virtually have no limit to how many tracks are recorded. Instead of working on the balance of multiple microphones to achieve the blend desired, we now record each microphone onto separate tracks. The final balance decision is left until much later.
More of these decisions should be made while recording and committed to as the performance is happening, rather than leaving it up to the mixer to magically divine what the artist and producer were trying to capture during the recording process. As a general guide, I would say 50-60 tracks should be the maximum number a session should have; less is even better. That way, all the production decisions are made and a mixer isn’t spending expensive time bouncing tracks and editing.

Mike Fraser is an engineer/mixer whose recent credits include: AC/DC’s Iron Man 2 Soundtrack, Airbourne, Melissa Auf der Maur, Jets Overhead, Franz Ferdinand, Hail The Villain, Chickenfoot, Elvis Costello, Die Mannequin, Sam Roberts, and Marianas Trench.

Mastering in the 21st Century: Louder Than God Intended by Bryan Martin

Saturday, June 19th, 2010

What can I say? Louder wins. So in the spirit of the 21st century, I have been experimenting with extreme volume mastering and, yes, I can do that (eek ack). Like Bob Ludwig said, “I used to work really hard at making records sound good; now I just make them loud.” You want it as loud as Metallica or U2? No problem. It does help if all the dynamics and transients have not been obliterated by the machismo of the mix bus limiter. A brick is a hard thing to swallow, and even harder to master regardless of sexual prowess. Honey, where did you put my volume knob?

It would appear that in the new i-Reality, mastering is about volume. Many wax nostalgic for the halcyon days of analog, tape, and studios (does anyone remember laughter, or large format consoles, or a chief tech?), but let’s get real, kids: no one is accusing modern recordings of sounding great. Every basement has a studio, and a bathroom. Abbey Road simply cannot exist in your laptop.

Thankfully there are still a few refugees from the lost world fighting the extinction of fidelity in a digitalia loaded with distortion, MP3s, and earbuds. I guess music and passion are kind of like a bad teenage crush or heroin. I am still mastering with custom-built uber-fi tube gear and designing more. Who doesn’t get all doe-eyed at the thought of the birthing of their musical baby through those lovely glowing valves and hunks of iron (4 per cent silicon steel, actually)? It’s big. It’s industrial. Hey, can you do a Vulcan mind-meld on that thing? And everything that leaves here sounds better than when it came in.

As far as pricing goes, if the session is unattended and payment is immediate, I can accommodate any budget. So I hope to see all of you in the brave new race-to-the-bottom, or should I say, over-the-top-of-digital-zero world of: Mastering in the 21st Century (this should be said by Powdered Toast Man). Louder is louder.

Grammy Award-winning mastering engineer Bryan Martin can be found at Sonosphere Mastering, www.sonosphere.ca, or in the lab building oversized tube gear that is not street legal in most first-world countries.

Controlling Feedback Onstage Using Phase To Your Advantage, Part 2: The Interaction Between Speakers by Peter Janis

Monday, April 19th, 2010

When an acoustic guitar is used onstage, it is usually connected via a direct box that splits the signal to the onstage amplifier and the PA system. The PA will then split the signal again to drive the wedge monitors and the main house sound system. When all of these loudspeakers are blasting at the same time, they interact. In fact, they mostly interact in the bass region, where the longer, low-frequency sound waves meet to either reinforce each other or cancel each other out. This effect is known as modal distortion. Recording studios commonly employ bass traps to reduce hot spots known as room modes. These are exaggerated depending on the room’s geometry and natural resonant frequencies. And guess what … room modes, like gravity, exist everywhere, including on a live sound stage.
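If you’re curious where those modes actually sit, here’s a rough sketch in Python (the room dimensions are made up for illustration). Each room dimension of length L resonates at multiples of c/2L, where c is the speed of sound:

```python
SPEED_OF_SOUND = 343.0  # metres per second at room temperature

def axial_modes(dimension_m, count=4):
    """First few axial mode frequencies (Hz) for one room dimension."""
    return [n * SPEED_OF_SOUND / (2 * dimension_m) for n in range(1, count + 1)]

# Hypothetical 10 m x 8 m x 3 m stage; real rooms will differ.
for name, dim in [("length", 10.0), ("width", 8.0), ("height", 3.0)]:
    print(name, [round(f, 1) for f in axial_modes(dim)])
# The length and width modes (roughly 17-86 Hz) all sit deep in the bass,
# which is why the speakers interact mostly down there.
```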

Here’s what happens: you play a chord on the guitar and, depending on where you are standing, the sound waves from the wedge monitor and the PA system will either amplify each other if they are in phase or cancel each other out if they are out of phase. When they are in phase, the resulting amplitude at that particular frequency will increase or even double. If you find that a certain frequency is feeding back when you stand in front of your monitor, in all likelihood you are experiencing two or more waves that are combining, causing a resonant feedback problem. There is absolutely no point trying to calculate the phenomenon, as it depends on a host of variables such as the PA system, the monitors, the size of the room, the room acoustics, and so on. But you can try reducing feedback by following this simple procedure.
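To see the reinforcement and cancellation in actual numbers, here’s a minimal sketch; the 80 Hz tone and the perfectly in/out-of-phase arrivals are idealized assumptions:

```python
import numpy as np

fs = 48000              # sample rate (Hz)
t = np.arange(fs) / fs  # one second of time
f = 80.0                # an example low frequency

wedge = np.sin(2 * np.pi * f * t)                    # wave from the wedge monitor
pa_in_phase = np.sin(2 * np.pi * f * t)              # PA wave arriving in phase
pa_out_of_phase = np.sin(2 * np.pi * f * t + np.pi)  # PA wave half a cycle off

print(np.max(np.abs(wedge + pa_in_phase)))      # ~2.0: amplitude doubles (+6 dB)
print(np.max(np.abs(wedge + pa_out_of_phase)))  # ~0.0: near-total cancellation
```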

First, start by eliminating unneeded bass frequencies by rolling off the low end below 100 Hz. This is the one fix that you should absolutely consider before doing anything, as low frequencies are the primary problem with resonant feedback. Bass below 300 Hz is considered to be omni-directional, meaning that it will be everywhere. By eliminating excessive low end, you make the task of controlling feedback easier. There is also another benefit – ever notice that it is way easier to get feedback from an electric guitar when the sound is distorted? Guess what. Like gravity and modal distortion, the same laws of physics apply everywhere. So, if your acoustic guitar is distorted, you will get more feedback. To eliminate distortion, make sure you use a high-quality direct box that is able to handle transients without choking. Since most of the sound energy is contained in the bass, when you roll off the low end, you are actually making it easier for the buffer or amplifier inside the DI box to work. Less distortion = less feedback.
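For the technically inclined, that low-end roll-off is just a high-pass filter. Here’s a minimal sketch using SciPy, with white noise standing in for the guitar signal:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
# A 2nd-order (12 dB/octave) high-pass at 100 Hz, roughly what the low-cut
# switch on a DI box or channel strip does.
sos = butter(2, 100, btype="highpass", fs=fs, output="sos")

guitar = np.random.randn(fs)     # white noise standing in for a guitar signal
filtered = sosfilt(sos, guitar)  # everything below ~100 Hz is now rolled off
```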

Now that you have rolled off the bass, you are ready to turn up your PA system and monitors. Start playing chords and let the guitar ring. Turn your system up until it begins to resonate. Now take a step back from your wedge monitor and see what happens. Now move sideways.

As you move around, the feedback character will change. This is because you are standing in the middle of a multitude of room modes. If the feedback is most active near the monitor, try moving the monitor electronically by reversing the electrical phase. Most professional DI boxes have a 180-degree polarity reverse switch that enables you to do this. What you are doing, basically, is changing the modal distortion. This can often move a phase-adding mode away from where you are standing, which can help reduce feedback.
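In signal terms, that polarity switch simply inverts one path. Here’s a minimal sketch of how a flip can turn a dead spot into reinforcement; the 5 ms path difference between monitor and PA is an arbitrary assumption:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
delay_s = 0.005  # assumed 5 ms path difference between monitor and PA

def summed_peak(freq, flip=False):
    direct = np.sin(2 * np.pi * freq * t)
    delayed = np.sin(2 * np.pi * freq * (t - delay_s))
    if flip:
        delayed = -delayed  # what the 180-degree polarity switch does
    return np.max(np.abs(direct + delayed))

# At 100 Hz, a 5 ms offset is exactly half a cycle, so the waves cancel;
# flipping polarity turns that cancellation into reinforcement.
print(round(summed_peak(100), 2), round(summed_peak(100, flip=True), 2))  # 0.0 2.0
```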

Another possible fix is to simply move the wedge monitor away from where it is so that the physical relationship changes. If you have an instrument amp onstage, moving it back a few inches can also help. This will cause different frequencies to either amplify each other or cancel each other out depending on where you stand. The point being, we have yet to EQ the sound, yet we are dramatically shifting the way the natural sound interacts so that we minimize feedback naturally. Once you have maximized the output, you can then fine-tune your system using the EQ.

Peter Janis is the President of Radial Engineering, the Port Coquitlam, BC-based manufacturer of music and audio equipment. Visit www.radialeng.com for more information.

It’s All In The Ears by Laurence Currie

Monday, April 19th, 2010

While teaching as a guest speaker at Dalhousie University and at the community college in Halifax, I had so many students ask, “What setting do you use on that?” I would have to tell them every time, “I don’t have a setting. The setting is whatever my ears tell me it should be.”

To think that every single bass track has to be run through a particular type of compressor or EQ and have this or that on it is a total misconception. It’s a case-by-case thing. Anyone who’s thinking about becoming an engineer should either find someone who’s willing to tutor you or find a reputable place where you can learn a little bit about it. I originally learned the trade of sound engineering at a school that relied very heavily on technical knowledge. If you want to become a really good engineer, you have to know all of that stuff. Above and beyond that, it’s a lot of experience, a lot of trial and error. The most important tools you have are your ears. Using them is the main thing – and have a good head on your shoulders that houses those ears.

Laurence Currie is a professional sound engineer and Co-Host of MasterTracks, currently airing on AUX.tv.

Controlling Feedback Onstage Using Phase To Your Advantage: Part 1 by Peter Janis

Monday, April 19th, 2010

Anyone who has played an acoustic instrument onstage knows that feedback can be a serious problem. What few realize is that there are solutions beyond radically altering the EQ. The following looks at the various problems and solutions at hand. For simplicity, we will discuss an acoustic guitar, but the same principles apply equally to a violin, mandolin, banjo, or contrabass.

Before we get too far ahead of ourselves, we need to first identify the problems. There are basically two types of feedback that occur onstage: high-frequency whistles and low-frequency resonance. Both are caused by the sound emanating from the loudspeaker being so loud that it overtakes the instrument, feeding back through the pickup and into the PA system to form an audio loop. This endless cycle is called a feedback loop, or feedback for short.
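If you like numbers, here’s a minimal sketch of why the loop runs away; the gain figures are illustrative only. Each trip around the loop multiplies the level by the loop gain, and anything over 1.0 grows without limit:

```python
# Illustrative numbers only: each trip around the loop (instrument -> PA ->
# air -> pickup -> PA ...) multiplies the level by the loop gain.
def level_after_trips(loop_gain, trips=10, start=1.0):
    level = start
    for _ in range(trips):
        level *= loop_gain
    return level

print(level_after_trips(0.9))  # ~0.35: below unity gain, the loop dies out
print(level_after_trips(1.1))  # ~2.59: above unity gain, the loop howls
```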

High-frequency feedback is often caused by sound from a wedge monitor going directly into the instrument’s microphone. This can also occur with piezo-type transducers. The usual fixes for high-frequency feedback are to turn down the volume, reposition the microphone, or employ some form of equalization to eliminate the problem frequency.

Low frequency feedback occurs when bass energy from the speaker system causes the instrument to vibrate. This is also known as resonant feedback. The sound system causes the soundboard (top of the guitar) to vibrate in sympathy with a particularly loud bass frequency. The vibration is picked up by the instrument pickup and recycles itself as feedback. Some musicians will seal the sound hole using a rubber plug. This can reduce feedback, but also degrades the sound quality of the instrument.

The Downside To Using EQ To Solve Feedback Problems:
The most common approach to eliminating feedback is to use some sort of notch filter to find the offending frequency and remove it with a narrow-band EQ. The problem with this approach is that it winds up being a Catch-22. For instance, if you reduce the midrange to eliminate feedback, you are actually removing the “meat,” the most important part of the sound, from the monitors. To make up for the loss in the midrange, folks invariably increase the stage volume and guess what … more feedback.
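For reference, here’s roughly what such a notch looks like in code. This is a minimal SciPy sketch; the 250 Hz offending frequency and the Q of 30 are made-up values:

```python
import numpy as np
from scipy.signal import iirnotch, tf2sos, sosfilt

fs = 48000
f0 = 250.0  # hypothetical offending frequency
Q = 30.0    # high Q = a very narrow notch

b, a = iirnotch(f0, Q, fs=fs)  # design the notch filter
sos = tf2sos(b, a)

monitor_feed = np.random.randn(fs)    # stand-in for the monitor signal
notched = sosfilt(sos, monitor_feed)  # 250 Hz is now carved out of the feed
```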

To make matters worse, some will introduce a form of automatic feedback filtering system to “magically” solve the problem. These devices introduce a series of very narrow filters that rapidly move around to squash feedback as soon as it occurs. The resulting sound is best described as comb-filtered, an effect that studios spend thousands trying to eliminate!
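If you’ve never heard the term, comb filtering is what you get when a signal is summed with a delayed copy of itself. A minimal sketch, with an arbitrary 1 ms delay:

```python
import numpy as np

fs = 48000
delay = int(0.001 * fs)  # a 1 ms delay, chosen arbitrarily

signal = np.random.randn(fs)
combed = signal.copy()
combed[delay:] += signal[:-delay]  # sum the signal with a delayed copy

# The result has deep notches at 500 Hz, 1.5 kHz, 2.5 kHz, ... spaced every
# 1 kHz (1/delay); plotted, the response looks like the teeth of a comb.
```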

Does this mean that using an equalizer is bad? Of course not. The number one rule with EQ is, and always will be: less is best. In some situations, the only option you may have will be to introduce some radical EQ curves into your monitors, but when you do so, keep in mind that you are moving further and further away from the natural sound of the instrument instead of actually solving the problem.

Peter Janis is the President of Radial Engineering, the Port Coquitlam, BC-based manufacturer of music and audio equipment. Visit www.radialeng.com for more information.

Choosing a USB Audio Interface by Alec Watson

Monday, April 19th, 2010

Computer recording keeps getting easier and more accessible.

Just a few short years ago, in order to do any “real” recording, one needed some kind of expensive internal controller card (and the guts to break open their computer to install it), a digital converter, and some good outboard microphone preamps; and we’re not even touching on the gear necessary to monitor your music. Today, there are so many choices for getting pretty darned good audio into your computer (at a good price) that it has once again become a little confusing when it comes to making the right choice. In fact, I was “eBaying” last night and found a “Professional Engineer” who is willing to sell you his thoughts on purchasing the “right” USB audio interface. Not that I want to deny the dude his eBay income, but as a little gift from CM to you, save your $16 US (you can apply it to your new interface); here is what you need to know…

USB or Firewire?
I am almost certain I am going to get some hate mail from some better-informed tech guy as to why I am wrong, but the honest truth is, it doesn’t really matter. That said, there are a few considerations. No, USB and Firewire aren’t going to sound any different, but there may be some usage differences. If you have a computer that has all sorts of USB peripherals plugged in – printers, hard drives, card readers, USB coffee maker … and you have a Firewire port sitting empty, then it would probably be wise to go with a Firewire audio interface; you will never receive the dreaded “USB device not recognized” message AND you are likely to achieve lower latencies due to less bus traffic. If that sounds like a bunch of techno crap, well, apart from the fact that it is (techno crap), rest assured I will explain it later so that you too can impress your friends!

On the flip side, I would tend to go with a USB interface if I was using it with my laptop. Yes, my laptop does have a Firewire port, but it also has six USB ports. A lot of USB interfaces run off the power supplied by the USB port, and since I don’t have many USB peripherals plugged into my laptop and don’t want to carry a wall wart (power adapter) around with me, a bus-powered USB interface is likely the more practical choice.

Latency – What The Heck Is It And Why Do I Care?
Among the manufacturers of USB audio interfaces, there is a lot of hype about latency. Latency, in practical terms, is the delay that occurs between the moment your audio enters the interface, travels to the CPU (the main processing chip in your computer), is processed (effects and/or EQ applied to your audio), and then returns to your USB audio interface to be played through your speakers or headphones. Some USB interfaces have lower latencies than others; for me, however, any latency is too much! I prefer to “direct monitor,” which is how most interfaces achieve zero latency. Direct monitoring means that the interface splits the audio into two paths: one path goes to your computer, the other goes directly to your headphones; the result is zero latency. The drawback is that you won’t be able to hear your vocal or guitar, etc., with any of the cool effects that your computer can apply to them. Personally, I would rather hear my voice dry than gooped up with effects and late.
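To put numbers on it, here’s a rough sketch of how buffer size turns into delay; the buffer sizes are typical values, not any manufacturer’s spec, and real round trips add converter and driver overhead on top:

```python
# Round-trip latency is roughly two buffers of audio (one in, one out);
# real interfaces add converter and driver overhead on top of this.
def round_trip_ms(buffer_samples, sample_rate=44100):
    one_way_s = buffer_samples / sample_rate
    return 2 * one_way_s * 1000  # in milliseconds

print(round(round_trip_ms(256), 1))  # ~11.6 ms: noticeable while tracking
print(round(round_trip_ms(64), 1))   # ~2.9 ms: much harder to perceive
```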

Alec Watson is a producer/engineer who lives in Reno-hell, Vancouver Island. He can be contacted at alec@alecwatson.com.
