


Turn That Amp UP! by Gordie Johnson

Thursday, August 19th, 2010

I hear a lot of guitar players pluggin’ into a lot of devices to try and find their sound. There seems to be a trend now in our industry where the soundman is the ruler. Guitar players have to turn down their amps, drummers play behind plexiglass shields, and guys use in-ear monitors so that the sound can be controlled. I’m sorry, but that is fucking boring if you ask me.

I don’t know one artist that sounds better for doing things this way. If I want to hear that kind of audio, I’ll go see Phantom at the Winter Garden Theatre, or I’ll go see Cats or something. It is such a Broadway theatre approach. Rock ’n’ roll has got nothing to do with that. Plug into an amp and turn that shit up.

Soundmen, you can do this. You can make the club sound awesome with a loud-ass band playing on stage. After all, that’s your job. Soundmen have done it for decades. If you want to get a gig in a studio, get a job as a studio engineer. Live engineers have to deal with high SPLs. I just don’t think all of the technology they throw at live audio makes it sound better. Some things are advanced: digital desks make festival stages way better than they ever were and wireless technology has made things easy for big festivals, but I really don’t know any rock ’n’ roll band worth the time and effort that plays according to the soundman’s rules.

Capturing The Natural Sound Of A 300 Year Old Cello Part 1 by Ron Searles

Thursday, August 19th, 2010

When first setting out to make this recording – Canadian cellist Winona Zelenka recording Bach’s complete Cello Suites – I realized that capturing the natural sound of the cello was paramount. Winona is playing the “Starker” Guarnerius cello, made in 1707. It is one of the finest cellos in the world today. Our first challenge was finding the ideal place to record. We needed a great sounding space, with few noise interruptions – a constant problem in most churches. Because this was to be a long-term recording commitment, the space needed to be available over a long period of time.

As luck would have it, a friend who had hosted many afternoon chamber music performances in his house was very enthusiastic to help us out. He has a very large, beautiful home north of Ajax, ON that fulfilled all of our requirements.

We started by doing numerous trial recordings in various locations – living room, front lobby, upstairs balcony overlooking the foyer, etc. – finally settling on a very large room at the back of the house. The dimensions of the room are about 35 ft. by 50 ft. with about a 25 ft. height. It has a clay tile floor with sloping cedar ceiling and glass walls; the acoustics are very similar to those of a small church. The cello sounded wonderful!

To capture this sound, I chose a custom-matched set of very high-quality ribbon mics. They seemed to “hear” the cello in a way I really liked – no hype, very natural, with good off-axis response, making the room sound lovely as well. The “figure 8” pattern of the ribbon gives a nice pickup of the distant and denser reflections from the back of the room, while eliminating the closer hollow sounding reflections from the sides. I’m not a fan of adding any artificial reverb for this type of recording, so the way the mics respond to the room’s reverb is crucial for me.
Next came the mic placement. We wanted an intimate sound, but with some bloom from the room – similar to what Winona might hear while playing the cello in a hall. For a number of chamber music and film score recordings, I've employed a three-mic array based on the Decca Tree for the core set-up. Experimentation yields the best final results, but I start with the left and right mics about 6 ft. apart, the centre mic dead centre but about 2 ft. forward of the left and right mics, and in this case, about 4 ft. from the cello's strings, a bit above the contact point of the bow. Any asymmetry (even a fraction of an inch) will throw off the left-to-right balance. The three mics create a good stereo image, with more control of the centre image than just using a stereo pair.
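To see why a fraction of an inch matters, here's a rough sketch (Python, with hypothetical coordinates built from the distances above) that compares the arrival time of the cello's sound at the left and right mics. A perfectly centred source reaches both at the same instant; nudge it an inch off-centre and the two arrival times diverge, skewing the stereo image:

```python
import math

SPEED_OF_SOUND_FT_S = 1125.0  # approximate speed of sound in feet per second

def arrival_delay_ms(source, mic):
    """Time of flight from source to mic, in milliseconds."""
    return math.dist(source, mic) / SPEED_OF_SOUND_FT_S * 1000.0

# Decca-Tree-style layout from the article (coordinates in feet):
# left/right mics 6 ft apart, centre mic 2 ft forward of them.
left = (-3.0, 0.0)
right = (3.0, 0.0)
centre = (0.0, 2.0)
cello = (0.0, 6.0)  # roughly 4 ft in front of the centre mic

# Perfectly centred source: identical delay to left and right.
print(arrival_delay_ms(cello, left) - arrival_delay_ms(cello, right))  # 0.0

# Shift the source one inch off-centre: the left/right delays no longer
# match, which is what throws off the left-to-right balance.
off = (1.0 / 12.0, 6.0)
print(arrival_delay_ms(off, left) - arrival_delay_ms(off, right))
```

The numbers are small (fractions of a millisecond), but the ear is very sensitive to exactly these inter-channel timing differences, which is why the symmetry of the array matters so much.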

See Part 2 of Ron’s article in the October 2010 issue of Professional Sound.

Ron Searles is a three-time Gemini Award winning recording engineer, with an additional six nominations. He has hundreds of album credits from all music genres and has recorded and mixed the scores for many award winning feature films including The Sweet Hereafter, Being Julia, and Capote. Ron is employed as a Senior Post Audio Engineer at CBC, his most recognizable work there being the current theme to Hockey Night in Canada.


Less Is Best When Recording Tracks by Mike Fraser

Saturday, June 19th, 2010

As a mixer, a problem I continually encounter is a song's track count. I sometimes receive projects that have over 240 tracks. With 64 outputs on the Pro Tools rig I use, a lot of tracks have to be combined before I can even hear all of the musical sections as intended.

When recording in the early days, only a single microphone was placed in a room. To balance the music, the players were placed around the mic. Loud instruments like drums and brass would be placed further away; softer instruments like acoustic guitars or vocals would be placed closer to the mic. The end result was a live performance properly balanced on one mono track.

Next came the era of multi-track recording. Four, eight, and eventually 16-track recorders came into being. The engineer and producer would laboriously strive to balance between capturing that magical performance and getting the right blend to tape. For example, The Beatles’ “I Want To Hold Your Hand” was recorded on four tracks and “Hey Jude” was recorded on eight tracks. Final mixing was easy as everything would have been “pre mixed” due to lack of tracks during the recording. Soon, 16-track machines gave way to 24-track machines and finally, in the heyday of analog recording, two or more 24-track machines were synced together to create 48 or more tracks. As you can imagine, 24 to 48 tracks created a much more involved mixing process.

Today, we virtually have no limit to how many tracks are recorded. Instead of working on the balance of multiple microphones to achieve the blend desired, we now record each microphone onto separate tracks. The final balance decision is left until much later.
More of these decisions should be made while recording and committed to as the performance is happening – not left for the mixer to magically divine what the artist and producer were trying to capture during the recording process. As a general guide, I would say 50-60 tracks should be the maximum number a session should have; less is even better. That way, all the production decisions are made and the mixer isn't spending expensive time bouncing tracks and editing.

Mike Fraser is an engineer/mixer whose recent credits include: AC/DC's Iron Man 2 Soundtrack, Airbourne, Melissa Auf der Maur, Jets Overhead, Franz Ferdinand, Hail The Villain, Chickenfoot, Elvis Costello, Die Mannequin, Sam Roberts, and Marianas Trench.

Mastering in the 21st Century: Louder Than God Intended by Bryan Martin

Saturday, June 19th, 2010

What can I say? Louder wins. So in the spirit of the 21st century, I have been experimenting with extreme volume mastering and, yes, I can do that (eek ack). Like Bob Ludwig said, “I used to work really hard at making records sound good; now I just make them loud.” You want it as loud as Metallica or U2? No problem. It does help if all the dynamics and transients have not been obliterated by the machismo of the mix bus limiter. A brick is a hard thing to swallow, and even harder to master regardless of sexual prowess. Honey, where did you put my volume knob?

It would appear that in the new i-Reality, mastering is about volume. Many wax nostalgic about the halcyon days of analog, tape, and studios (does anyone remember laughter or large format consoles, or a chief tech?), but let's get real, kids: no one is accusing modern recordings of sounding great. Every basement has a studio, and a bathroom. Abbey Road simply cannot exist in your laptop.

Thankfully there are still a few refugees from the lost world fighting the extinction of fidelity in a digitalia loaded with distortion, MP3s, and earbuds. I guess music and passion are kind of like a bad teenage crush or heroin. I am still mastering with custom-built uber-fi tube gear and designing more. Who doesn’t get all doe-eyed at the thought of the birthing of their musical baby through those lovely glowing valves and hunks of iron (4 per cent silicon steel, actually)? It’s big. It’s industrial. Hey, can you do a Vulcan mind-meld on that thing? And everything that leaves here sounds better than when it came in.

As far as pricing goes, if the session is unattended and payment is immediate, I can accommodate any budget. So I hope to see all of you in the brave new race-to-the-bottom, or should I say, over-the-top-of-digital-zero world of: Mastering in the 21st Century (this should be said by Powdered Toast Man). Louder is louder.

Grammy Award-winning mastering engineer Bryan Martin can be found at Sonosphere Mastering, www.sonosphere.ca, or in the lab building
oversized tube gear that is not street legal in most first-world countries.

Controlling Feedback Onstage Using Phase To Your Advantage, Part 2: The Interaction Between Speakers by Peter Janis

Monday, April 19th, 2010

When an acoustic guitar is used onstage, it is usually connected via a direct box that splits the signal to the onstage amplifier and the PA system. The PA system then splits the signal again to drive the wedge monitors and the main house sound system. When all of these loudspeakers are blasting at the same time, they interact. In fact, they mostly interact in the bass region, where the longer, low-frequency sound waves meet to either reinforce each other or cancel each other out. This effect is known as modal distortion. Recording studios commonly employ bass traps to reduce hot spots known as room modes. These are exaggerated depending on the room geometry and the room's natural resonant frequency. And guess what … room modes, like gravity, exist everywhere, including on a live sound stage.

Here’s what happens: You play a chord on the guitar and, depending on where you are standing, the sound waves from the wedge monitor and the PA system will either amplify each other if they are in phase or cancel each other out if they are out of phase. When they are in phase, the resulting amplitude at that particular frequency will increase or even double depending on where you are standing. If you find that a certain frequency is feeding back when you stand in front of your monitor, in all likelihood, you are experiencing two or more waves that are combining, causing a resonant feedback problem. There is absolutely no point trying to figure it all out by calculating the phenomena as this will occur based on a host of variables such as the PA system, the monitors, the size of the room, the room acoustics, and so on. But you can try reducing feedback by following this simple procedure.

First, start by eliminating unneeded bass frequencies by rolling off the low end below 100 Hz. This is the one fix that you should absolutely consider before doing anything, as low frequencies are the primary problem with resonant feedback. Bass below 300 Hz is considered to be omni-directional, meaning that it will be everywhere. By eliminating excessive low end, you make the task of controlling feedback easier. There is also another benefit – ever notice that it is way easier to get feedback from an electric guitar when the sound is distorted? Guess what. Like gravity and modal distortion, the same laws of physics apply everywhere. So, if your acoustic guitar is distorted, you will get more feedback. To eliminate distortion, make sure you use a high-quality direct box that is able to handle transients without choking. Since most of the sound energy is contained in the bass, when you roll off the low end, you are actually making it easier for the buffer or amplifier inside the DI box to work. Less distortion = less feedback.
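As a rough picture of what that low-end roll-off does, here is a simple one-pole high-pass filter sketch in Python (a generic RC-style filter, not any particular DI's circuit): bass well below the 100 Hz cutoff comes out attenuated, while the guitar's upper range passes through almost untouched:

```python
import math

def one_pole_hpf(samples, cutoff_hz, rate=48000):
    """First-order RC-style high-pass: rolls off energy below cutoff_hz."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / rate
    a = rc / (rc + dt)
    out, y, prev_x = [], 0.0, 0.0
    for x in samples:
        y = a * (y + x - prev_x)  # standard one-pole high-pass recurrence
        prev_x = x
        out.append(y)
    return out

def gain_at(freq_hz, cutoff_hz=100.0, rate=48000):
    """Measured steady-state gain of the filter at freq_hz."""
    n = rate  # one second of test tone
    tone = [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]
    filtered = one_pole_hpf(tone, cutoff_hz, rate)
    return max(abs(v) for v in filtered[n // 2:])  # skip the start-up transient

print(round(gain_at(50.0), 2))    # well below unity: the bass is tamed
print(round(gain_at(1000.0), 2))  # close to 1.0: the guitar body is untouched
```

A gentle first-order slope like this is already enough to pull a useful amount of energy out of the feedback-prone low end without making the guitar sound thin.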

Now that you have rolled off the bass, you are ready to turn up your PA system and monitors. Start playing chords and let the guitar ring. Turn your system up until it begins to resonate. Now take a step away from in front of your wedge monitor and see what happens. Then move sideways.

As you move around, the feedback character will change. This is because you are in the middle of a multitude of room modes. If the feedback is most active near the monitor, try moving the monitor electronically by reversing the electrical phase. Most professional DI boxes have a 180-degree polarity reverse switch to enable you to do this. What you are basically doing is causing the modal distortion to change. This can often move a phase-adding mode away from where you are standing, which can help reduce feedback.
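Electrically, that polarity switch just multiplies every sample by -1. A minimal Python sketch (an arbitrary test tone, not real stage conditions) shows how flipping polarity turns a reinforcing sum at your position into a cancellation:

```python
import math

rate, freq = 48000, 120
direct = [math.sin(2 * math.pi * freq * i / rate) for i in range(rate // 10)]
monitor = list(direct)  # monitor wave arriving in phase at your spot: a hot mode

# In phase, the two sources stack up to roughly double the level.
hot = max(abs(d + m) for d, m in zip(direct, monitor))  # ~2.0

# The DI's 180-degree switch inverts every sample of the monitor feed...
flipped = [-m for m in monitor]

# ...so at that same spot the waves now cancel instead of reinforcing.
null = max(abs(d + m) for d, m in zip(direct, flipped))  # 0.0
print(hot, null)
```

In a real room the two arrivals are never perfectly matched like this, so the flip won't give you a perfect null – but it does shuffle the hot and quiet spots around, which is often all you need.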

Another possible fix is to simply move the wedge monitor away from where it is so that the physical relationship changes. If you have an instrument amp on stage, moving it back a few inches can also help. This will cause different frequencies to either amplify each other or cancel each other out depending on where you stand. Point being, we have yet to EQ the sound, but we are dramatically shifting the way the natural sound will interact so that we minimize feedback naturally. Once you have maximized the output, you can then fine-tune your system using the EQ.

Peter Janis is the President of Radial Engineering, the Port Coquitlam, BC-based manufacturer of music and audio equipment. Visit www.radialeng.com for more information.

It’s All In The Ears by Laurence Currie

Monday, April 19th, 2010

While teaching as a guest speaker at Dalhousie University and at the community college in Halifax, I had so many students ask, “What setting do you use on that?” I would have to tell them every time, “I don’t have a setting. The setting is whatever my ears tell me it should be.”

To think that every single bass track has to be run through a particular type of compressor or EQ and have this on it or that on it is a total misconception. It's a case-by-case thing. Anyone who's thinking about becoming an engineer should either find someone who's willing to tutor you or find a reputable place where you can learn a little bit about it. I originally learned the trade of sound engineering from a school that relied very heavily on technical knowledge. If you want to become a really good engineer, you have to know all of that stuff. Above and beyond that, it's a lot of experience, a lot of trial and error. The most important tools you have are your ears. Using them is the main thing – and have a good head on your shoulders that houses those ears.

Laurence Currie is a professional sound engineer, and Co-Host of MasterTracks, currently airing on AUX.tv.

