

Roger’s Rules of Compression by Roger Nichols

Thursday, April 19th, 2007

1: Don’t. I would rather spend the time to ride the solo or vocal to get a cleaner sound with no compression artifacts. I also prefer to manually remove pops and sibilance. You can use the volume automation in a DAW to eliminate vocal pops and sibilance problems by drawing a V-shaped notch at the center of the pop or ess. It does not have to be very wide, and it will work better than any automated de-esser or pop filter.
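That V-shaped notch is just a gain envelope multiplied into the signal. Here is a minimal Python sketch of the idea; the function name and the plain-list sample data are illustrative only, not from any DAW's API.

```python
# Sketch: emulate a V-shaped volume-automation notch over a pop or ess.
# Gain ramps linearly down to a floor at the centre and back up,
# the same shape you would draw in a DAW's volume-automation lane.

def v_notch(samples, center, half_width, floor=0.1):
    """Scale samples by a V-shaped gain curve centred on `center`.

    Gain is 1.0 outside the notch, `floor` at the centre sample,
    and ramps linearly in between.
    """
    out = []
    for i, s in enumerate(samples):
        dist = abs(i - center)
        if dist >= half_width:
            gain = 1.0
        else:
            gain = floor + (1.0 - floor) * (dist / half_width)
        out.append(s * gain)
    return out

# A constant-level signal with a narrow notch drawn at sample 4:
print(v_notch([1.0] * 9, center=4, half_width=4))
```

As the tip says, the notch does not have to be wide; only the few samples around the pop are pulled down.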

2: For the most transparent compression, use a ratio between 2:1 and 3:1. This will increase the apparent loudness of your vocal, but will not have that annoying pumping sound of badly adjusted compressor settings.

3: Don’t compress more than 4dB. Watch the gain reduction meter on the compressor. Adjust the input gain or threshold level until the reduction reads between 3 and 4dB, no more.
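Tips 2 and 3 reduce to simple arithmetic: above the threshold, the output rises at 1/ratio the rate of the input. A hypothetical hard-knee calculation (the function name and figures are mine, not from any particular compressor) shows how a 3:1 ratio keeps the reduction near the 4 dB ceiling:

```python
def compressor_gain_db(level_db, threshold_db, ratio):
    """Static gain (in dB) of an ideal hard-knee compressor.

    Below threshold the gain is 0 dB; above it, the output rises at
    1/ratio the rate of the input, so the gain reduction is
    (level - threshold) * (1 - 1/ratio).
    """
    if level_db <= threshold_db:
        return 0.0
    return -(level_db - threshold_db) * (1.0 - 1.0 / ratio)

# A peak 6 dB over threshold at 3:1 is reduced by 4 dB,
# right at the 4 dB ceiling:
print(compressor_gain_db(-14.0, -20.0, 3.0))
```

If the gain reduction meter reads more than 4 dB, the same arithmetic says to raise the threshold (or lower the input) rather than back off the ratio.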

4: Use multiple compressors connected in series if you need more than 4dB of compression. Set the attack and release settings differently on each and you will get more total compression without sounding like you’re killing the vocalist.

5: Parallel compression works in some circumstances. You have the dry signal and the compressed signal – mix them together to get the sound you want. Make sure you compensate for any delay in the compressor to avoid phasing.
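The delay-compensation point can be sketched in a few lines. This is an illustrative mix, assuming a compressor that reports its latency in samples; the names and gain values are mine:

```python
def parallel_mix(dry, wet, wet_latency, dry_gain=0.7, wet_gain=0.3):
    """Blend a dry signal with a compressed copy that arrives
    `wet_latency` samples late. The dry path is delayed by the same
    amount so the two stay sample-aligned and do not comb-filter."""
    delayed_dry = [0.0] * wet_latency + list(dry)
    n = min(len(delayed_dry), len(wet))
    return [dry_gain * delayed_dry[i] + wet_gain * wet[i] for i in range(n)]

# Stand-in "compressed" signal: half-level copy arriving 2 samples late.
dry = [1.0, -1.0, 1.0, -1.0]
wet = [0.0, 0.0] + [0.5 * s for s in dry]
mixed = parallel_mix(dry, wet, wet_latency=2)
print(mixed)
```

Without the `[0.0] * wet_latency` padding, the dry and wet copies would be offset by two samples and partially cancel, which is exactly the phasing the tip warns about.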

Roger Nichols is a recording engineer and producer and has won seven Grammy Awards, the 2001 TEC Award, and received 11 Grammy nominations. He is on the Board of Governors for the Miami Chapter of NARAS and lectures at Berklee School of Music, Musicians Institute, Recording Workshop, Full Sail, Vancouver Film School, and University of Miami. Visit www.rogernichols.com.

5 Tips For Stalking, Managing, & Capturing Rogue Sounds With Traps & Baffles by Russ Berger

Thursday, April 19th, 2007

Employing Sound Traps and Baffles is much like hunting.

1. Know your hunting grounds: Before the hunt, know and understand your acoustical environment. Once you bound a space with walls, a floor, and a ceiling, you’ve committed acoustics. The boundaries of your space define the low frequency modal response and set limitations for the ambient decay time. Wonderful programs and countless texts have been written that clearly describe the process for analyzing, predicting, and managing acoustical boundary conditions.

Once you understand your environment you will better know how rogue sounds behave in the space; you can better identify where problems might lie and devise a trap to capture the problem.

2. Put the traps where the beavers are: Place traps to capture rogue sound much like you’d place traps for beavers. Placing beaver traps on the ceiling will do you little good, just like placing acoustical traps where the sound you want to capture doesn’t exist. Beavers pretty much live their lives along the floor plane. But rogue sounds live in the three dimensional world, so successful hunting can be achieved if the traps are placed in proximity to boundaries and intersections.

3. Be sure your passive trap is big enough to capture your game. Lower frequencies require larger and deeper traps to control and manage long wavelength rogue sounds.
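One common rule of thumb quantifies "big enough": a porous absorber works best when its depth reaches roughly a quarter wavelength of the lowest frequency to be trapped. A quick Python sketch, assuming that quarter-wavelength rule (the function name is mine):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def quarter_wave_depth_m(freq_hz):
    """Rough depth (metres) a porous absorber needs to be effective
    at freq_hz: a quarter wavelength, where particle velocity peaks
    and friction losses in the material do the most work."""
    return SPEED_OF_SOUND / (4.0 * freq_hz)

# A 100 Hz rogue sound needs a trap close to a metre deep,
# while 1 kHz is tamed by a few centimetres:
print(quarter_wave_depth_m(100.0), quarter_wave_depth_m(1000.0))
```

This is why thin foam catches only the small game; low-frequency beavers need much deeper traps.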

4. Know how many you want to trap: Trapping one beaver vs. an entire colony will require different methods. The effective trap absorption efficiency is proportional to the area of coverage.

5. Conceal the trap: A good looking studio always seems to sound a little better. Integrate your traps into the architecture and along with those rogue sounds you’ll catch new clients.

Bonus Tip #6: Go to www.RBDG.com – Russ Berger is owner of Russ Berger Design Group (RBDG), a design and consulting firm that combines expertise in acoustics, architecture, and interiors to create technical environments and buildings for recording studios, broadcast facilities, creative production spaces, and home theaters.

Grounding, Shielding, Hums, Buzzes, & Things That Go Zap! In Your Sound System by Neil A. Muncy

Thursday, April 19th, 2007

Noise susceptibility (or the lack thereof) in audio systems is a function of two principal factors: shielding, and the “pin-1 problem.” The endless conversations concerning this matter inevitably involve earth “grounding,” a subject which has been around for so long (200+ years) that it has devolved into a sea of confusion, misinformation, and mythology, even though it is completely dictated by easily understandable basic physics.

Conventional grounding mythology would have one believe that electronic systems of all kinds must be robustly connected to earth ground in order to function properly, audio signal processing systems in particular. The grounding reality is that airplanes, motor vehicles, laptop computers, blasters, etc. seem to work just fine without connections to earth ground. Nevertheless, A/V systems of all kinds are somehow considered exempt from this reality.

According to the conventional mythologists, “noise in audio systems must have something to do with grounding – what else could it be?” The bad news is that the short answer to this question would fill this entire issue many times over. The good news is that you will find a long list of reference material on the Professional Sound website, www.professional-sound.com. In addition, the June 1995 issue of the Journal of the Audio Engineering Society, entitled “Shields and Grounds,” includes seven papers that directly address this matter. Go to www.aes.org and look up “Special Publications.” It’s available to anyone for $15 US, less if you’re an AES member, and it may also be downloadable. It won’t take you long to realize that the conventional mythologists just might be wrong!

Neil Muncy has been around since the days when recorded sound was analog mono and vacuum tubes ruled the audio landscape. He has been a consultant in the audio field for many years, and can be contacted by email at: nmuncy@allstream.net.

Audio Phasing: Part II by Al Whale

Monday, February 19th, 2007

Comb filtering, which produces a hollow, diffuse, and thin sound, occurs when one microphone receives the same sound via two paths. A common example is shown in image E. If the microphone had been closer to the singer, the difference between the direct and reflected path lengths would have been greater, so the reflected path’s reduced level would have had less effect. The reflected level would also have been lower if the floor had been carpeted.
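The "comb" is literal: the reflection's extra travel time puts cancellation notches at evenly spaced frequencies. A short Python sketch of the arithmetic (the function name and the 1 m / 2 m example paths are illustrative):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def comb_notch_freqs(direct_m, reflected_m, count=4):
    """First few cancellation-notch frequencies (Hz) when a reflection
    arrives late relative to the direct sound. Notches fall where the
    path-length delay equals an odd number of half-cycles."""
    delay = (reflected_m - direct_m) / SPEED_OF_SOUND  # seconds
    return [(2 * k + 1) / (2 * delay) for k in range(count)]

# A 1 m direct path with a 2 m floor bounce puts the first notch
# at 171.5 Hz, with more notches every 343 Hz above it:
print(comb_notch_freqs(1.0, 2.0))
```

Note that shrinking the delay (moving the mic closer, as suggested above) pushes the notches higher and spreads them apart, which is part of why close miking sounds less hollow.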

Methods of correction:
1. Keep the vocal audio mix low into the monitor.
2. Handhold or place the microphone closer to the singer.

While the monitor helps the singer, as the monitor’s gain is increased, the resulting vocal will be more muffled. Many professionals use in-ear monitors to eliminate this effect. Although not popular with the performers, using music only on the monitors (no vocal) will also minimize comb filtering. Often, the house audio suffers when trying to improve the monitoring for the performers.

This article was prompted after I attended several concerts in which the music was excellent but the dialogue was difficult to understand. Most of the production crews knew the script so well that they were unaware of the problem. If you asked the audience, they would probably say they thoroughly enjoyed the music; if you asked them specifically about the script, they probably would be unable to answer. The comb effect of excessive stage monitoring mushes the dialogue so that the audience (which doesn’t know the words) can’t understand it. If a concert is trying to tell a story, it misses that goal and provides only enjoyable music.

Ideas to reduce comb filtering:
· Reduce the number of paths from the same audio source.
· Use fewer microphones.
· Reduce the possibility of reflections.
· Reduce the relative amplitude of the additional paths.
· Increase the difference in path lengths, so the secondary path is further attenuated.
· Use absorbent material.
· Use the directional qualities of the microphones.
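The path-length point is just the inverse-square law, the same calculation the attenuation site below performs. A one-line Python version (the function name is mine):

```python
import math

def attenuation_db(d_near, d_far):
    """Free-field level drop, in dB, going from distance d_near to
    d_far from a point source: 20 * log10(d_far / d_near)."""
    return 20.0 * math.log10(d_far / d_near)

# Doubling the path length costs about 6 dB, so a reflection
# travelling twice as far as the direct sound arrives 6 dB down:
print(round(attenuation_db(1.0, 2.0), 2))
```

The weaker the secondary path is relative to the direct one, the shallower the comb-filter notches become.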

The following sites assisted in this article: Calculations of attenuation over distance www.mcsquared.com/dbframe.htm; calculations of distances www.pagetutor.com/trigcalc/trig.html.

Al Whale is a Broadcast Technologist and Assistant Chief Engineer at CHBC-TV. He has also set up and operated sound systems and taught sound in many church settings. Reach him at awhale@chbc.com.

Rich’s Rights To Recording Electric Guitar by Richard Chycki

Friday, January 19th, 2007

I’ve been fortunate to record a number of legendary-status guitar players like Aerosmith’s Joe Perry and Rush’s Alex Lifeson. Watching them work is truly an inspiring and educational opportunity; artists like these have accrued a wealth of real-world experience in manifesting instantly recognizable guitar tones. Being the captor of these tones, I’ll share some tips about recording electric guitars.

Right tools for the job: This is a no-brainer but is a common miss. Select gear and tone that works for the song and put your individuality into it. Want to get the right tone? Listen to it. Really. That means pointing the speaker right at your head, not blowing across your knees while you stand in front of a half-stack. Off-axis settings are brittle and don’t sit well in a mix.

Right mics: While there are myriad possibilities for miking an amp, I’ve had great success with a few favourite mics. First is the venerable Shure SM57. I’ve tried the Shure Beta 57 and, while it sounds similar, the polar pattern is so tight that finding the sweet spot in front of the speaker can be quite a mission. Other mics I commonly use include the Sennheiser 421, the Sennheiser 409, and the Earthworks SR30. Special mention goes to the Royer 121 ribbon mic. This workhorse sounds amazing for almost any electric guitar purpose from country to metal, and its specially designed ribbon element won’t fry from the high SPL of close-miking an amp on 11.

Right place at the right time: Personally, I prefer to record guitars in more of a dead environment, although I’ve been known to track in extremely live environments (Joe Perry’s tiled bathroom for one) for effect. In all situations I have the amp lifted well off the floor to avoid troublesome reflections, and I don’t use anything hollow that could resonate (like a roadcase).

Right phase: For multi-miking, it’s important that the phase relationship between the mics remains consistent. Liberal testing of phase using the console’s phase flip button is a necessity when blending mics. For mics placed at various distances from an amp, comb filtering can result from the phase shift due to the longer time the sound takes to reach the more distant mic. Fortunately, a small company in the Los Angeles, CA area called Little Labs has a device called the IBP (In-Between Phase). It can shift the phase to any degree from 0 to 180, so it’s a simple task to dial the mics into exact phase.
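The distance-to-phase relationship is easy to compute. This sketch converts the extra travel distance to a phase shift at a given frequency (function name and the 0.5 m example are illustrative, not from the IBP):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def mic_phase_shift_deg(extra_distance_m, freq_hz):
    """Phase shift (degrees, wrapped to 0-360) between a close mic and
    one placed extra_distance_m farther from the amp, caused by the
    extra acoustic travel time to the distant mic."""
    delay = extra_distance_m / SPEED_OF_SOUND  # seconds
    return (delay * freq_hz * 360.0) % 360.0

# A mic 0.5 m farther back sits about a half-cycle out at 343 Hz:
print(mic_phase_shift_deg(0.5, 343.0))
```

Because the shift grows with frequency, a fixed distance offset cancels some frequencies and reinforces others, which is the comb filtering the tip describes.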
Happy recording!

Richard Chycki is currently recording a new CD for Rush and has worked with Aerosmith, Mick Jagger, Seal, Pink, and many others in the past. Reach him at info@mixland.ca.

Miking The Snare Drum by Tim Crich

Tuesday, December 19th, 2006

For the best snare drum sound, a properly tuned, professional drum kit is paramount. Whether the band is Death Metal From Saskatoon or The Polka Pals ‘n’ Gals, the drums will be the backbone of the recording.

Start with a dynamic mic, as it can handle the high transient levels of the snare drum, and a solid, stable mic stand. Position the mic off-axis with the rest of the drums to minimize leakage. Aim the mic directly at the point of impact – where the tip of the stick makes contact with the drum. Look down the barrel and line up the placement.

Of course, place the mic where the player can’t accidentally whack it. Expecting a drummer not to hit a poorly placed mic is like asking a record producer not to order sushi; sooner or later, it’s going to happen. It’s your fault if the drummer hits the mic with the drumstick, not his.

For more crack, maybe place a second mic with a different quality, such as a crisper high end, alongside the first. Keep these two mic capsules as close together as possible because two mics on any one source can create phasing issues. Perhaps add a third (switched out-of-phase) mic underneath the drum aimed up at the snares. Get the best sound using mic choice, placement, and level before reaching for the equalizer.
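The out-of-phase switch on that bottom mic is a simple polarity flip before the two signals are summed. A minimal sketch of why it matters (function name, gains, and sample data are illustrative):

```python
def blend_snare_mics(top, bottom, flip_bottom=True, bottom_gain=0.5):
    """Sum a top and a bottom snare mic. The bottom mic faces the
    opposite side of the drum, so its signal is roughly inverted
    relative to the top mic; flipping its polarity before mixing
    keeps the two from cancelling each other."""
    sign = -1.0 if flip_bottom else 1.0
    return [t + sign * bottom_gain * b for t, b in zip(top, bottom)]

# The bottom mic captures an inverted copy of the hit; flipped,
# the two mics reinforce instead of cancel:
top = [1.0, -1.0, 1.0]
bottom = [-1.0, 1.0, -1.0]
print(blend_snare_mics(top, bottom))
```

Try the same call with `flip_bottom=False` and the summed level drops instead of growing, which is the thin sound an unflipped bottom mic produces.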

If possible, record the individual snare drum tracks on your digital recorder, and analyze the sound waves. Work on moving the mics around so, when recorded, all the drums are in total phase. Good luck!

Tim Crich is a recording engineer/writer living in Vancouver. His credits include The Rolling Stones, John Lennon, Billy Joel, Bon Jovi, KISS, and lots more. Watch for Tim Crich’s Assistant Engineers Handbook 2nd Edition coming soon. Reach him at tcrich@intergate.ca, www.aehandbook.com.


4056 Dorchester Rd., #202,Niagara Falls, ON
Canada L2E 6M9 Phone: 905-374-8878
FAX: 888-665-1307 mail@nor.com