
Sound Advice

Talking Modern Mastering With Jonathan Wyner

Jonathan Wyner

Jonathan Wyner is the president and chief mastering engineer at M Works Mastering Studios in Cambridge, MA, and an associate professor of music production and engineering at the Berklee College of Music. He has mastered recordings by Aerosmith, David Bowie, Pink Floyd, Bruce Springsteen, the London Symphony Orchestra, and others.

PS: Why do you think streaming could end the loudness wars?

JW: The distinction I would make is between owning music that lives locally on a hard drive and streaming audio. Each of those models has a different way of experiencing audio level associated with it. When you listen to anything that’s streamed – and streaming in some ways follows the same paradigm as broadcast – the streamer or the broadcaster has an interest in making sure the user experience is good, so, to the best of their ability, they’re going to adjust level so that everything plays back at a relatively consistent level and the user is happy listening between records and songs… If you own music on your hard drive, the artist is really interested in making sure their song compares favourably to the one before it. That is when people start pushing levels up as high as they can get them, and in that paradigm – the way the audio is played back from the hard drive – everything is normalized to the loudest point in the track, or “peak normalized,” so it’s less about loudness and more about absolute level.

When we talk about the loudness war as it manifested in that paradigm – whether the audio was coming from CDs or MP3 files stored on a hard drive – the issue was, “Can we get the RMS level as close to peak as possible?” because, with peak normalization, peak is the level that sets how loud everything comes out of the speakers, so pushing RMS toward it is what creates the impression of loudness. So, as we move more and more into streaming and we get more and more loudness-normalized audio, this idea of pushing the RMS level up starts to change.

You know, I want to be careful not to say that the loudness wars are over. They aren’t, for a number of reasons, but what we do have right now is a mixed paradigm where people are listening in both ways, artists are concerned about both paradigms at once, and we have to think about both in mastering. I do ultimately think we’re starting to see a shift in the final output level of masters that get distributed out into the world, but these kinds of changes take time…
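To make the two paradigms Wyner describes concrete, here is a minimal Python sketch – an illustration added alongside the interview, not something he provides – that computes peak and RMS level and applies both peak normalization and a simple loudness normalization. Real streaming services measure loudness in LUFS per ITU-R BS.1770 rather than plain RMS, so the targets and signals here are assumptions for demonstration only.

import numpy as np

def peak_dbfs(x):
    # Sample peak in dBFS (0 dBFS = digital full scale).
    return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

def rms_dbfs(x):
    # RMS level in dBFS; a rough stand-in for perceived loudness.
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def peak_normalize(x, target_peak_db=-0.1):
    # The "hard drive" paradigm: scale so the loudest sample hits the ceiling.
    return x * 10 ** ((target_peak_db - peak_dbfs(x)) / 20)

def loudness_normalize(x, target_rms_db=-14.0):
    # The streaming paradigm: scale so the average level hits a shared target.
    return x * 10 ** ((target_rms_db - rms_dbfs(x)) / 20)

# One dynamic test tone and one heavily limited ("squashed") one.
t = np.linspace(0, 1, 44100, endpoint=False)
dynamic = 0.3 * np.sin(2 * np.pi * 220 * t)
squashed = 0.99 * np.tanh(8 * np.sin(2 * np.pi * 220 * t))  # RMS pushed close to peak

for name, sig in (("dynamic", dynamic), ("squashed", squashed)):
    print(name,
          "peak-normalized RMS: %.1f dBFS," % rms_dbfs(peak_normalize(sig)),
          "loudness-normalized RMS: %.1f dBFS" % rms_dbfs(loudness_normalize(sig)))

Under peak normalization the squashed signal ends up with a much higher RMS level, so it simply sounds louder; under loudness normalization both signals land at the same average level and pushing RMS toward peak buys nothing, which is the shift Wyner is pointing to.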

PS: And I would assume that, as a mastering engineer, you’re happy to see that change.

JW: Well, on the one hand, I almost feel as if it’s my job not to express a preference but rather to give the artists what they need. But on the other hand, there are some seriously negative effects that come from pushing a lot of level into limiters, so as far as moving away from that practice goes, I’m all for it.

The issue of maximizing impact, and what some people might interpret as loudness in music, extends far beyond using limiters and into making effective arrangements and good mixes, and we can certainly abuse compression tools to make something seem louder coming out of speakers, no matter what the paradigm. I don’t want to skirt the issue entirely, so yeah, I am more than happy to see level relax if it ultimately means that what people listen to is going to be better. It is going to sound better, there is going to be more detail and nuance, and it will allow certain kinds of dynamic changes to creep back into record production and music production.

PS: Are there other changes caused by the shift from downloaded music to streaming that impact what you do?

JW: That’s actually an interesting question. First of all, when you produce files that are going to get turned into MP3s or AAC files for iTunes, we do have to play around with level a little bit in order to end up with better-sounding MP3s and AACs, and that usually means turning the level down a tiny bit. So, just because we’re working in that arena, we have to play around a little with level, but not to the extent that we do when we go into loudness normalization.

The second thing, and it’s funny because we’re kind of working at cross purposes here, is that all of the streamed formats – Spotify using Ogg Vorbis or iTunes Radio streaming AAC or what have you – require that we reduce bandwidth in order to get a good streaming experience out to listeners. In some ways, that will perpetuate lossy audio, at least for a period of time. Some people would argue that lossy audio, or these consumer formats that are not quite full fidelity, is probably not as good as full-fidelity audio, so we live with that longer because we’re in a streaming paradigm. But in the long run, bandwidth into people’s homes and their devices is increasing, hard disk space is increasing, and services like TIDAL, for instance, are streaming full-bandwidth audio. So I think it is a matter of time before the requirement – the thing that drives us to use lossy audio – begins to disappear.
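As a rough illustration of “turning the level down a tiny bit” before creating MP3s or AACs: one common reason – and this framing is an assumption rather than a detail Wyner gives – is that lossy encoders can overshoot the original sample peaks, so masters headed for lossy encoding are often trimmed to leave a little headroom below 0 dBFS. A minimal sketch, using the same conventions as the earlier example:

import numpy as np

def trim_for_lossy(x, ceiling_db=-1.0):
    # Turn the master down (never up) so its sample peak sits at roughly
    # ceiling_db, leaving headroom for MP3/AAC encoder overshoot.
    # The -1 dBFS ceiling is an illustrative value, not a fixed standard.
    peak_db = 20 * np.log10(np.max(np.abs(x)) + 1e-12)
    gain_db = min(0.0, ceiling_db - peak_db)
    return x * 10 ** (gain_db / 20)

In practice a mastering engineer would check true-peak (intersample) levels with a dedicated meter rather than simple sample peaks, but the idea is the same: a small trim before encoding in exchange for a cleaner-sounding lossy file.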
