Music is Life
Music Production
Recording Spaces
Mar 08, 2017 12:45 PM
Check out the recording spaces we have available here at Castle Row Studios!
If You Could Only Have One EQ What Would It Be?
Dec 29, 2016 07:04 PM
I’d have to say Alloy 2 by iZotope. I actually have three go-to EQs, the other two being Pro-Q 2 by FabFilter and EQ Eight, Ableton Live’s stock EQ plug-in. BUT, if I could only have one EQ it would be Alloy 2, and here’s why: it’s much more than an EQ alone. Most producers and mixers would probably call it a Swiss Army knife plug-in because on top of traditional EQ functions it also has a transient shaper, an exciter, not just one but two compressors (each with gating and expansion functions), a de-esser, AND a limiter. Whew, that’s a lot! Let’s look a little at each of its functions.
- EQ — 8 bands of powerful and customizable EQ types such as analog, vintage, resonant, and Baxandall, plus a slew of filter types.
- Transient Shaper — It’s kind of like a compressor that’s always on. No need to worry about setting all kinds of parameters; simply turn the attack and sustain portions of your envelope up or down.
- Exciter — Add harmonic content with its 4 types of saturation, all easy to blend from subtle to aggressive.
- Compressors — 2 compressors that feature gating/expansion, wet/dry mix, an adjustable detection filter and knee, and 2 modes: Peak and RMS.
- De-esser — helps rein in “s” sounds (sibilance) and harsh high frequencies.
- Limiter — 2 modes, with unlinkable stereo channels.
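The transient shaper is the module that tends to puzzle people, so here's a toy Python sketch of the general idea behind transient shaping (my own illustration, not iZotope's actual algorithm): compare a fast envelope follower against a slow one, and boost the signal wherever the fast one wins, i.e. at the attacks.

```python
def follow(signal, attack, release):
    """One-pole envelope follower: rises at the attack rate and falls at
    the release rate (coefficients between 0 and 1)."""
    env, out = 0.0, []
    for x in signal:
        coeff = attack if abs(x) > env else release
        env += coeff * (abs(x) - env)
        out.append(env)
    return out

def shape_transients(signal, attack_gain=2.0):
    """Boost attacks only: where the fast envelope exceeds the slow one
    the material is transient, so it gets extra gain; sustained material
    passes through nearly untouched."""
    fast = follow(signal, attack=1.0, release=0.1)   # jumps on every hit
    slow = follow(signal, attack=0.05, release=0.1)  # lags behind onsets
    out = []
    for x, f, s in zip(signal, fast, slow):
        amount = min(max(f - s, 0.0) / (s + 1e-9), 1.0)  # 0..1 "transientness"
        out.append(x * (1.0 + (attack_gain - 1.0) * amount))
    return out
```

Unlike a compressor, there's no threshold to set: the process is level-independent, which is why the plug-in only asks you for attack and sustain amounts.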
So what are the downsides? Well, this is superficial, but it’s been quite a while since they’ve updated the GUI, and it looks a little outdated. Alloy 2 also lacks any M/S (mid-side) processing, which is something I do quite a bit of at my studio. That being said, I find the best application for Alloy 2 is sessions that involve a lot of live tracking, meaning recordings of bands or some type of live ensemble. Most of these sessions will contain numerous mono tracks because they’re typically sourced from single microphones, such as a mic on a guitar cab or snare drum; no point in using M/S processing on a source that contains no side information!
Completely In the Box!
Jul 19, 2015 06:20 PM
Andrew Scheps, a major mixing engineer, recently proclaimed that he now works “100% in the box” (jump to 35:43 in the video). Wow. For most home and small professional studios that are doing the same, this is a major endorsement. What exactly does this mean?
Whether you’re a potential client, a newcomer to music production, or a veteran, you might have heard the argument of analog gear versus digital gear. When technology first allowed us to document music as sound waves—not just notation :p—we only had analog gear at our disposal. This meant that the medium which recorded the music (or sound) transformed the sound waves into something directly analogous, e.g. a 33⅓ rpm vinyl record has grooves in it that are physical representations of the actual sound waves. Digital gear, on the other hand, transforms sound waves into digital information—1s and 0s—that is encoded and decoded with particular mathematical algorithms, which is a fancy way of saying “computer programs”. If you looked at the grooves of a vinyl record closely you could see the actual sound waves, but if you looked at the digital files of recorded music you would see bizarre computer language. Analog audio also tends to imprint saturation and coloration that pure digital does not provide. Although this introduced distortion may seem unpleasant, it can actually be harnessed in a musical or sonically pleasing manner.
Because of this (and other factors whose technical details I won’t go into) people have come to describe digital audio as “cold” and analog audio as “warm”. This was certainly true at the advent of digital recording technology; however, that was about 40 years ago! Today’s technology has evolved to the point where digital recording gear, programs, and plug-ins can process audio not just with high precision but can also emulate the “warm” analog sound that people have come to love. I’ve seen well-known mixers such as Scheps do mixes using only a MacBook and a Universal Audio Satellite (and probably other in-the-box plug-ins).
None of this is to the detriment of analog gear and recording; in fact, a hybrid setup of analog outboard gear used in conjunction with digital gear gives you some incredible options. But hopefully, if you’re a new musician or producer, or are searching for recording studios to work in, you won’t have to worry about whether or not they have analog gear.
Analog vs. Digital
Jun 19, 2015 09:55 PM
Frequently I have clients ask me this sort of question: Isn’t analog equipment better than digital? My answer is yes. And no. They’re two different technologies that each have their own unique sets of pros and cons. Lemme es’plain.
Analog recording has the benefit of being ‘continuous’, which basically means it represents a waveform with just about infinite resolution, whereas a digital recording can only approximate the waveform (although it does a good job of fooling your ears into hearing it as continuous).
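To make that "approximation" concrete, here's a toy Python sketch (my own illustration, not any particular converter's design) of what an A/D stage does: it only looks at the waveform at discrete instants in time, and rounds each measurement to the nearest step a 16-bit number can store.

```python
import math

SAMPLE_RATE = 48_000   # discrete measurements per second
BIT_DEPTH = 16         # bits available to store each measurement

def sample_and_quantize(freq_hz, n_samples):
    """Digitize a 'continuous' sine wave: look at it only at discrete
    instants, then round each value to the nearest 16-bit step."""
    levels = 2 ** (BIT_DEPTH - 1)   # 32768 steps per polarity
    out = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                       # discrete time instant
        x = math.sin(2 * math.pi * freq_hz * t)   # the 'real' waveform
        out.append(round(x * (levels - 1)) / (levels - 1))
    return out
```

The rounding error per sample is tiny (on the order of one part in 32,767), which is exactly why digital does such a good job of fooling your ears.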
Analog gear does not typically behave in a ‘linear’ fashion; there are attributes which are somewhat random and unpredictable, but they sound musical. Digital, on the other hand, pretty much does exactly what you tell it to do (I’m not talking about plug-ins that emulate analog gear), which is the main reason people say digital sounds ‘cold’. Neither of these characteristics is a value judgment; both can be used effectively depending on the context.
Analog gear requires physical space and a lot of maintenance (both time and $$$), whereas digital gear resides mostly on a hard drive and maintenance can be as simple as downloading an update. Analog gear is very expensive as well; e.g. a Shadow Hills compressor costs about $9,000 while the digital version is about $300.
To make matters more interesting, many software developers are creating plug-ins that emulate the behavior of analog equipment. In fact, they’re doing it so well that it’s often hard to distinguish between the two in what we call ‘A/B’ tests.
In the end it’s best to think of both analog and digital as tools; left in the proper hands, much art can be created.
The Loudness Wars
May 08, 2015 11:34 AM
Even though I’ve been recording, mixing, and mastering music for about 10 years now I was not aware of the “loudness wars” until a few years ago and it wasn’t until last year that I committed myself to researching the issue in depth. I must admit that when someone first introduced me to the concept (a writer mind you, not a musician) my first response was, “What’s wrong with that?” Recordings have come a long way in the last 60 or so years especially in regard to clarity. Almost all of humankind’s endeavors progress over time, we’re always trying to improve (hopefully). I like most of today’s recordings in comparison to older ones: the noise floor is virtually non-existent, elements have much more clarity, and the music hits my car stereo’s 12” subwoofers hard!
Now that I’ve been doing a lot of mixing & mastering work in the past few years, I’ve come to appreciate the opposition to the loudness wars. I think the first problem comes from the label “loudness” itself; it’s a bit of a misnomer, and the more appropriate term would be “dynamics wars”. Today’s recordings have an extremely narrow window of dynamics—the difference between the loudest and quietest parts—which makes the music ‘seem’ loud or louder (I frequently see EDM tracks with barely 3 dB of dynamic range, shit, that’s loud!). As a musician I can appreciate the negative reactions to this practice. One of the marks of a great musician is their control over dynamics. Why? Because dynamics have direct effects on other musical phenomena such as phrasing, articulation, and blending with other musicians, to name a few.
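That "window of dynamics" is something you can measure yourself. Here's a small Python sketch (my own simplified stand-in for real dynamic-range metering, not any official DR algorithm) that reports peak and RMS levels in dBFS; the gap between them, the crest factor, is a rough proxy for how squashed a track is.

```python
import math

def peak_and_rms_db(samples):
    """Peak and RMS level of a block of samples, in dBFS. The gap
    between the two (the crest factor) shrinks as a track is pushed
    harder into limiting."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    to_db = lambda v: 20 * math.log10(v) if v > 0 else float("-inf")
    return to_db(peak), to_db(rms)
```

A full-scale square wave measures a crest factor of 0 dB (totally squashed), while even a plain sine wave already shows about 3 dB; real, breathing music typically measures far more.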
People against the practice of making music increasingly louder claim that it’s destroying the music. In a way it is: it’s destroying the dynamics, which are part of the components that make something ‘musical’. It would be like altering the melodies or chords of a song. But here’s where I have to jump off the train: you’ve already altered the music by the mere act of recording it. Do you know how much crazy stuff we have to do to record a sound so that it sounds as if we were there listening to it? Then there’s all the mixing & mastering that goes along with it to ‘correct’ the sound. In fact, that’s where the term ‘EQ’ comes from: it means equalization, because recorded sounds didn’t sound like proper representations of the real-world sound, so they needed to be ‘equalized’ out.
Another thing to consider is the genre of music. Yes, Metallica’s Death Magnetic is loud. Way too loud. But the loudness here, to me, comes from the waveforms being pushed too hard rather than the dynamic range between the musicians being squished. Look, it’s metal, it’s supposed to be loud! Every single player is playing balls-to-the-wall loud without much regard to the volume of the other performers. This isn’t an orchestra here where the 2nd violins need to listen to the cello section playing the main theme so they have to play underneath them. No, it’s a metal band being as loud as they all can. Same goes for EDM, where do you think the tracks were meant to be performed? At a club with hundreds of people dancing, not a concert hall where the nosebleed seats need to hear the shading and nuances between flutes and clarinets.
Which leads me to yet another concept: not all genres of music rely on the interplay of dynamics. To force every recorded track to have a large dynamic range would be like forcing everyone to write songs using only a C# Hungarian minor scale. This is what makes the world of music so beautiful and interesting: every style and genre focuses on its own set of musical phenomena (scale, key, rhythm, groove, harmony, et al.). Could you imagine if every musician consciously wrote music that utilized every single musical parameter? It would merely be a mental exercise. Dynamics are just one of many parameters that can be manipulated and organized in music.
In summation, I think both sides have valid points, and there exists a happy compromise somewhere in the middle. Most pop/rock music is generally loud to begin with. Rarely have I seen a live pop performance where the performers used dynamic manipulation as a main musical element (as opposed to a jazz ensemble or symphony orchestra), so why mix and master an album with that in mind? Then again, there is a point where a limited dynamic range is just fatiguing on the ear. This gray area between soft and loud is subjective, hence all the fighting over loudness.
Black Dog
May 04, 2015 05:54 PM
I’ve very recently found out that “Black Dog” was recorded with the guitar running straight through the board and then through a pair of 1176 compressors in series.
Wow.
I remember first hearing that riff when I was about 12 years old. At that time almost all the music I had been exposed to was rap or R&B. There was NO ONE around to introduce me to this stuff. Luckily my cousin had a portable CD player in his closet that had, guess what, Led Zeppelin IV in it. I have no idea why, because all he listened to was rap. I listened to that CD over and over.
That was about 20 years ago and I’ve gone on to explore some of the deepest corners and darkest alleyways of music but I’ll never forget that legendary riff. And to think that it was recorded DI! Only goes to show that it’s not just the gear you have but your knowledge of how to use it.
Gain Staging
Apr 30, 2015 09:36 AM
Doing your own mixes or recordings? One of the first lessons I teach my students is about gain staging. But what exactly is that? When I first saw the term I thought it was some incredibly esoteric and nuanced technical factor in music production. The good news is that it’s actually one of the easiest things to understand (although there are technical aspects to be aware of).
The number one rule of gain staging is simple: never go over 0dBFS! When it comes to digital audio there is a ceiling which we cannot exceed, and that ceiling is 0dBFS. If you can keep everything underneath it then you’re already well on your way to making a good mix. When the signal exceeds 0dBFS the resulting sound wave gets “squared off”, which is unpleasant (you will also get inter-sample clipping, which is completely unacceptable in the mastering world). Remember that this is a technical issue, not a subjective one. If you want nasty distortion or a “squared off” sound wave there are better ways to accomplish that, such as employing distortion, saturation, or a down-sampler.
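The "squared off" part is literal. Here's a deliberately crude Python illustration of digital hard clipping (real converters and limiters are more involved, but the shape of the damage is the same):

```python
import math

def hard_clip(samples, ceiling=1.0):
    """Digital hard clipping: any sample beyond full scale is chopped
    flat at the ceiling, squaring off the top of the waveform."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

# A sine pushed ~6 dB too hot: everything over 0 dBFS becomes a flat top.
hot = [2.0 * math.sin(2 * math.pi * n / 100) for n in range(100)]
clipped = hard_clip(hot)
```

Push a sine 6 dB over the ceiling and roughly two-thirds of each cycle ends up flattened, which is where all that harsh added distortion comes from.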
The “staging” part of gain staging refers to every I/O (in/out) point in your chain. Suppose you’re recording an acoustic guitar and have some effects on the channel: you have a stage right at the input into the DAW. This outputs to your first effect; let’s say it’s an EQ with a high-pass filter and some corrective EQ. This EQ then outputs to your next effect, which might be a compressor. You set your compressor accordingly, and then it outputs to the channel strip. At no point during this chain should your levels exceed 0dBFS; in fact, it’s good practice to leave a small amount of headroom for insurance.
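As a sketch (with hypothetical gain values, not a real plug-in chain), here's how you could model those stages in Python and catch the exact I/O point where a chain goes over:

```python
import math

def db_to_gain(db):
    """Convert a decibel value to a linear gain multiplier."""
    return 10 ** (db / 20)

def check_stages(samples, stage_gains_db, ceiling_dbfs=0.0):
    """Run audio through a chain of gain stages and report the peak
    (in dBFS) at each I/O point, flagging any stage that goes over."""
    ceiling = db_to_gain(ceiling_dbfs)
    report = []
    for i, g_db in enumerate(stage_gains_db):
        samples = [s * db_to_gain(g_db) for s in samples]
        peak = max(abs(s) for s in samples)
        peak_db = 20 * math.log10(peak) if peak > 0 else float("-inf")
        report.append((i, round(peak_db, 2), peak <= ceiling))
    return report
```

Start a sine at roughly -12 dBFS and run it through +6, +3, and +6 dB stages: the first two I/O points stay legal, but the last one pokes about 3 dB over the ceiling, and the report pinpoints it.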
There are a few finer things to keep in mind:
- Don’t record at the hottest level possible, close to 0dBFS; that used to be good practice in the analog days but doesn’t work so well in the digital realm, for a few reasons. I suggest keeping your recording levels around -6dBFS peak to give yourself comfortable headroom to work with when you start mixing, and also to avoid any spurious peaks that may occur. Some people propose even more; don’t get too hung up on the actual numbers, and remember that the important thing is having plenty of headroom.
- Check every I/O (in/out) point and ensure you’re not going over 0dBFS. Oftentimes you’ll have one channel set up with multiple effects; check every stage.
- Find yourself a good meter. I frequently use the meter by Brainworx. Some limiters have good meters built into them, such as Universal Audio’s Precision Limiter and FabFilter’s Pro-Limiter; both work great and allow you to bypass the limiting function and use the meter alone. Take it one step further and use a K-14 option, keeping your RMS levels around 0dB. I won’t go into the technical details here, but you’ll find that your mixes come out much cleaner with this method.
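If you're curious what that K-14 reading actually means, here's a toy RMS meter in Python (my own simplification of the K-System idea, not any shipping plug-in): the scale is shifted so that 0 on the meter sits 14 dB below digital full scale.

```python
import math

def k14_rms(samples):
    """RMS level re-referenced K-14 style: 0 on the meter sits 14 dB
    below digital full scale, leaving that much room for peaks."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    rms_dbfs = 20 * math.log10(rms) if rms > 0 else float("-inf")
    return rms_dbfs + 14.0   # -14 dBFS RMS reads as 0 on the meter
```

So "keeping your RMS levels around 0dB" on a K-14 meter simply means averaging around -14 dBFS, with 14 dB of headroom left for transients.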
So those are the rules. Of course there are times to break them; I would be lying if I said that no mixing or mastering engineer purposely exceeds 0dBFS. However, these kinds of practices are done for creative purposes (and prudently); in the end everything must succumb to the 0dBFS ceiling. I was revisiting a project I had started years ago and began cleaning up some bad mixing choices. One channel was clipping badly at several stages in the effect chain, so I started ‘fixing’ it, trying to make it adhere to the rules. In the end I couldn’t replicate the original sound, which was so vital to the track, so I found a proper point in the chain and adjusted the gain so that the final output was under 0dBFS. Use caution when intentionally clipping your tracks.
Mastering for iTunes
Apr 28, 2015 09:49 AM
How do artists get their albums on the front page of iTunes? How do you get a "Mastered for iTunes" badge? Have you heard that iTunes music quality sucks compared to CD/Vinyl quality?
Why record with us?
Apr 01, 2015 08:13 PM
Finger Drumming
Mar 25, 2015 08:27 PM
Sneak peek of my new album! Here’s some finger drumming I’m working on; I was really getting into it today!
Studio Tips
Mar 14, 2015 08:36 PM
Ready to book some studio time? Here’s some advice to help get the most out of your studio session and keep the most in your wallet.
Mixing vs. Mastering pt.2
Mar 11, 2015 06:50 PM
If my previous article was too long for you (here), then this one should be short and sweet. The mastering engineer is responsible for making the final mix sound as sonically impactful as possible, ensuring there are no audio errors, adding album information, and preparing the file for final pressing. However, they can only do their best work if the mix they’re given is good to begin with. If you throw them a mix of, say, a rock band recorded in a garage with 4 mics and clipped files, they can’t do much with it.
I think a good analogy would be an article you might read in a magazine. An author can write an article about mastering and deliver it to the publisher as a text document. Then the editor physically arranges the text on the page, applies the proper fonts, maybe adds some inline quote boxes, and dresses it up with photos and such. Could you imagine if the author delivered the article filled with grammatical mistakes and spelling errors? No amount of layout polish would fix bad writing, and no amount of mastering can fix a bad mix. That’s how mixing and mastering work together.
Beethovenboy Productions