
Sound Advice

Talking Theatre Sound Design with Peter McBoyle

Peter McBoyle is one of Canada’s most successful theatre sound designers and consultants. In his 20-plus years in the industry, McBoyle has designed the sound for countless theatre productions for the Stratford Festival, Charlottetown Festival, National Arts Centre in Ottawa, Troika Entertainment, Ross Petty Productions, Dallas Theatre Centre, and Twyla Tharp’s Broadway musical Come Fly Away, among others. He is also the owner of PM Audio Design and teaches theatre sound at Humber College in Toronto.

PS: You’ve said in the magazine before that on shows such as Stratford’s The Sound of Music, you use a complicated delay matrix in order to image things to the stage, dividing the stage into zones and programming it so the delay times all shift to support the microphone in that zone. Can you explain why and when you use this delay matrix technique?

Peter McBoyle: Time of arrival is one of the biggest issues that we deal with on musicals. When the cast and orchestra are contributing acoustic energy into the room, intelligibility and fidelity suffer if the time of arrival between the live and reinforced systems isn’t as close as possible. This “source-oriented” reinforcement technique works on any stage configuration but is extremely helpful on thrust and in-the-round stages like the Festival and Tom Patterson theatres at the Stratford Festival. The challenge in these configurations is that the relationship between the actor and patron can be very different depending on where the actor is positioned. For example, if an actor is positioned at the extreme stage right side of the stage, they are close to the audience on the stage right side and far from the audience on the stage left side. In a traditionally delayed system where the front fill speakers are delayed to the centre cluster, the signal from the performer’s RF mic would arrive late to the audience that is close to the performer and early to the audience that is far. So by creating areas on stage, each with delay times set so the signals from the speakers arrive at the correct time for the actor’s position, you can get the time of arrival closer to correct for all seats in the house.
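
To put rough numbers on that arrival-time mismatch, here is a minimal sketch in Python. The stage, seat, and speaker positions are hypothetical, chosen only to illustrate the geometry of a thrust stage, and none of the figures come from McBoyle’s actual rigs.

```python
# Arrival-time mismatch on a thrust stage: an actor far stage right, heard
# acoustically and via a centre cluster, at a near seat and a far seat.
import math

C = 343.0  # speed of sound in m/s, approximate

actor     = (-5.0, 2.0)   # actor at extreme stage right (hypothetical coordinates, metres)
cluster   = (0.0, 9.0)    # centre cluster above the stage
near_seat = (-7.0, 0.0)   # patron close to the actor (stage right)
far_seat  = (7.0, 0.0)    # patron across the thrust (stage left)

def travel_ms(a, b):
    """Time for sound to travel between two points, in milliseconds."""
    return math.dist(a, b) / C * 1000.0

for name, seat in (("near seat", near_seat), ("far seat", far_seat)):
    acoustic   = travel_ms(actor, seat)    # direct sound from the actor
    reinforced = travel_ms(cluster, seat)  # same word arriving from the cluster
    print(f"{name}: acoustic {acoustic:.1f} ms, cluster {reinforced:.1f} ms, "
          f"offset {reinforced - acoustic:+.1f} ms")
```

With these invented positions, the cluster arrives roughly 25 ms after the live voice at the near seat and slightly before it at the far seat, which is the mismatch that per-zone delay times aim to reduce.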

We use Smaart or SIM to set eight to 10 different sets of delay times for all the speakers. We call this the “delay matrix.” On the DiGiCo SD10T we use at the Festival Theatre, we have a subgroup for each delay zone, and the theatre software allows you to set a delay time for each matrix cross-point where the subgroup feeds the output. In my experience this technique works very well when the reinforcement approach is subtler. It allows the audio system to be more “transparent” than without it because the level of the reinforcement is similar to the level of acoustic energy. As the show gets louder, the sound system starts to overtake the acoustic level, so in those situations it becomes better to use the more common technique of delaying the system to itself, so that the time of arrival for a fill speaker is in sync with the centre cluster or the main left and right, whichever is most appropriate.
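
As a rough illustration of the cross-point idea described above, and not of the SD10T software itself, the sketch below models each delay-zone subgroup as a row of per-output delay times. The zone names, output names, and millisecond values are invented for the example; in practice the times are measured and refined with Smaart or SIM.

```python
# Each delay-zone subgroup carries its own delay toward every output it feeds,
# so the same mic signal leaves different speakers at times that suit the
# actor's position on stage. All names and values here are illustrative.
cross_point_delay_ms = {
    "zone_1_DSR": {"cluster": 14.0, "frontfill_SR": 6.0,  "frontfill_SL": 38.0, "delay_ring": 52.0},
    "zone_5_C":   {"cluster": 10.0, "frontfill_SR": 22.0, "frontfill_SL": 22.0, "delay_ring": 48.0},
    "zone_9_DSL": {"cluster": 14.0, "frontfill_SR": 38.0, "frontfill_SL": 6.0,  "delay_ring": 52.0},
}

def delay_for(zone_subgroup: str, output: str) -> float:
    """Cross-point delay applied where a zone subgroup feeds a given output."""
    return cross_point_delay_ms[zone_subgroup][output]

# The same line routed to opposite zones reaches the SR front fill 32 ms apart,
# tracking where the actor actually stands.
print(delay_for("zone_1_DSR", "frontfill_SR"), delay_for("zone_9_DSL", "frontfill_SR"))
```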

PS: Can you explain how you design and implement these delay matrices?

PM: In Stratford, when I first started doing this, I used Meyer Sound’s Matrix3 digital matrix mixers to create this delay matrix. It worked, but it was a bit cumbersome and it tied up a lot of DSP to do it. Now we use the delay matrix that is a feature of the SD10T console. Either way, automation routes the RF mics to the correct delay zone cue by cue during the show, which we program based on the actors’ blocking. There are even more sophisticated systems, such as TiMax Tracker from Outboard Electronics, which not only features an automated delay matrix but also a wireless location system that tracks the performer on stage and automatically moves the mic signal to the correct delay settings as the actor moves around.
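
A simplified sketch of what that cue-by-cue automation amounts to, with a hypothetical cue list, character names, and zone labels; on the console this lives in the theatre automation software rather than in external code.

```python
# Cue-by-cue zone routing programmed from the actors' blocking: each cue
# assigns each RF mic to the delay-zone subgroup for the performer's position.
cue_list = {
    "3.1": {"MARIA": "zone_5_C",   "CAPTAIN": "zone_1_DSR"},
    "3.2": {"MARIA": "zone_9_DSL", "CAPTAIN": "zone_5_C"},
}

current_route = {}

def fire_cue(cue: str) -> None:
    """Move each RF mic to the delay-zone subgroup for its blocked position."""
    for mic, zone in cue_list[cue].items():
        if current_route.get(mic) != zone:
            current_route[mic] = zone
            print(f"cue {cue}: route {mic} -> {zone}")

fire_cue("3.1")
fire_cue("3.2")
```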

PS: Obviously intelligibility is a primary concern in theatre. During your 20-plus years doing this, which techniques or technologies have you discovered that have best improved intelligibility on the shows you work on?

PM: Certainly having a well-designed, well-tuned, and highly cohesive sound system with the appropriate amount of power is the best place to start. Without that it will always be an uphill battle to overcome the deficiencies of the system.

After that I would say the next most important thing is a well-executed mix. Most people don’t realize how active the mix is on a musical. The mix engineer is constantly opening and closing mics, and the goal is to only have a mic open when that person is speaking or singing. We don’t just open all the mics for everyone who is on stage and turn them off when they leave, as some people think. The interaction between mics would be terrible. With omnidirectional lavs on each performer’s head you would get what I call the “peanut butter cup effect”: lots of one person’s chocolate in the other person’s peanut butter, except without the great taste. If two people are close to each other on stage, their voices get into each other’s mics at similar amplitude but with different times of arrival. This creates comb filtering in the frequency domain and timing issues in the time domain. So you effectively take a good-sounding signal, make it sound bad, and add echo or reverb to it by having multiple open mics. Even when cast members are farther apart, having both their mics open has a negative effect. So we strive to get a tight mix and get any unneeded mics closed. This makes a huge difference to intelligibility and also, obviously, to gain before feedback.
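
The comb filtering McBoyle describes can be shown with a few lines of Python: summing a voice with a copy of itself arriving about 2 ms later at a similar level (both figures are illustrative assumptions, roughly two performers standing under a metre apart) carves deep, regularly spaced notches into the response.

```python
# Two open mics picking up one voice: the second mic hears it slightly later
# and slightly quieter. Summing the two gives a comb-filtered response with
# nulls at odd multiples of 1/(2*delay) = 250 Hz for a 2 ms offset.
import numpy as np

delay_s = 0.002   # second mic hears the voice ~2 ms late (assumed)
rel_level = 0.8   # at similar, slightly lower amplitude (assumed)

freqs = np.array([250, 500, 750, 1000, 1250, 1500], dtype=float)
# Magnitude of 1 + a*e^(-j*2*pi*f*t): the summed response of the two mics.
mag = np.abs(1 + rel_level * np.exp(-2j * np.pi * freqs * delay_s))

for f, level_db in zip(freqs, 20 * np.log10(mag)):
    print(f"{f:6.0f} Hz: {level_db:+5.1f} dB")
```

The alternating dips of roughly -14 dB and peaks of about +5 dB are why closing unneeded mics matters so much for intelligibility.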
