Tips for Singers and Rappers: Maximizing Your Studio Session

Each piece of the recording studio sums up to one big musical instrument – from the microphone, to the control room of computers and machines, to the facilitating engineer, all the way to the comfy couches in the lounge. Also, let’s not forget the most important part of this musical instrument: you, the creator. At Studio 11, providing the service of a studio session to musicians is a creative process we have optimized through technology, communication, and decades of musically-focused innovation. We’ve witnessed countless Chicago locals, as well as some of the world’s most cherished artists, use the studio at its highest potential. Throughout it all, I can assure you of one thing: the key to maximizing the musical potential of the recording studio is efficiency. If you’re efficient, you can create more. And if you can create more, your art can impact more people.

Below I’ve compiled a list of tips, insight, and common mistakes to avoid for making the best use of a studio session – from setting up an appointment to walking out the door with an amazing record. Some tips may save you 1 minute, and some may save you 30 minutes.

First, choose a studio with expertise in the music you’re creating.

Do your online research! I regularly meet rappers who come to Studio 11 after booking time at facilities with engineers who have no idea how to mix rap music. When this happens, we say welcome home.

When booking studio time, get to know the studio staff.

Believe it or not, even in this fast, modern era of screens and images, speaking to an actual human being on the phone is the best way to book studio time, especially if you have any questions. A brief conversation with professional studio staff will help you get adequately prepared, as well as book the perfect amount of studio time. Schedule your session in advance. Introduce yourself when calling, be ready to discuss what you want to work on, and be ready to make a security deposit. Know what exact dates and times work for you and your team before calling. A good first impression on the phone can go a long way in your musical career.

Arrive on point.

Plan your travels to arrive on time, not early or late. Also, while it may seem obvious, make sure you and your team know exactly where the studio is located. I often see artists arrive on time, but then waste time (or even worse, become interrupted during a good recording) by answering the phone to direct a lost team member.

Make sure your instrumentals are ready for the engineer.

After a friendly welcome, beats are the first thing the studio engineer will ask you for. The most efficient way to provide your beats is on a flash drive or hard drive. You can literally enter the studio, hand the drive of beats to the engineer, who will load the first beat into Pro Tools, and in as little as 4 minutes you’ll be getting set up behind the mic, ready to rock. If your beats are coming from YouTube, email the links to the studio before your session. 5 minutes on YouTube finding that “J Cole Type Beat” is 5 minutes less for creating. Moreover, when leasing beats from online beatmakers, it’s crucial to download the purchased beats immediately. Usually, internet beatmakers provide download links to their beats once a transaction has been made. I’ve been in plenty of studio sessions where clients forward these download links to our email without downloading ahead of time, only to find out that the links have expired or contain the wrong instrumental.

Additionally, if you’re a beatmaker bringing in a tracked-out beat, or an artist coming in with vocal files recorded elsewhere, always double-check that your stems sync up, are in WAV format (44.1 kHz, 24-bit), and that no files are missing or out of order. Do not come in with a Pro Tools session or project file. Bringing your computer along to a studio session, just in case, is never a bad idea either.
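
If you’re comfortable with a little scripting, you can sanity-check your stems before you even leave the house. Here’s a minimal sketch using Python’s standard-library wave module; the check_stem helper name is my own invention, not a studio tool:

```python
import wave

def check_stem(path, want_rate=44100, want_bits=24):
    """Return the stem's sample rate / bit depth and whether it
    matches the 44.1 kHz, 24-bit WAV target."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        bits = w.getsampwidth() * 8  # sample width is stored in bytes
        return {"rate": rate, "bits": bits,
                "ok": rate == want_rate and bits == want_bits}
```

It won’t catch a wrong instrumental, but it will catch a stray 48 kHz or 16-bit file hiding in the batch.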

Ask the studio what resources are available if you plan on integrating live instruments, such as electric guitars.

Be prepared. Trust, but always verify.

If you have a demo track for the song you’re working on, leave it on the back burner.

The studio can’t make a demo that already sounds bad suddenly sound good. Sorry, folks. Parting ways with a demo track can be tough, especially if you’ve listened to it constantly. In reality, if you listen to anything long enough, eventually something that sounds bad can appear to sound good. Using a demo track as a guide when recording or mixing also hinders efficiency, since referencing the demo interrupts more important tasks of the studio session. Ultimately, demo tracks can crystallize ideas and stall the creative, imaginative process. Prepare to re-create and reinvent ideas. One of the greatest assets of the professional recording studio is the ability to create with a fresh slate, alongside an objective engineer who has never heard your song before and therefore acts on instinct from years of experience. Trust that the creatives you hired (and researched before your studio session) will meet your vision without ever needing to hear your demo, or anything else for that matter. Similarly, when a mechanic starts working on a vehicle, he or she doesn’t need to be told every problem in advance. Surely, there are instances where demo tracks are helpful; however, if you do have a case of “demo-itis,” rest assured, the cure is simply entering the studio with an open mind.

Prepare to perform.

Make sure your song lyrics are written down, memorized, or well-defined in your creative mind before stepping up to the mic. Rehearsing ahead of time is imperative. A song full of hot punch lines and riffs may seem well-composed written in a notebook or iPhone notes, but quite the opposite when performed out loud (a microphone will reveal even more kinks). You can write endless punchlines down, but you can’t evaluate their rhythm without rehearsing out loud. The extra little words that aren’t the rhymes or punchlines are what usually throw things off. Written lyrics also don’t account for the need to take a breath! So, rehearse loudly and fully, again and again, until you truly know the best, most consistent way of performing your song.

Perform with passion.

Perform as if it’s your last chance to ever record. Perform with every part of your body. Use your chest and diaphragm. Say it like you mean it. Get emotional.

The best artists in the world are shockingly passionate when performing in the studio. Hands down, your performance is the most important part of a studio session. An inadequate performance can’t be fixed when mixing, so take your time to execute at your highest potential. If you want to sound like Kendrick Lamar, perform like Kendrick Lamar. There’s no such thing as a Kendrick Lamar button when mixing. Furthermore, if you want feedback from the engineer, or someone else in the room, ask away. By the same token, if someone is giving a distracting opinion, tell him or her to be quiet – or to leave the studio. Dim the booth lights if this makes you feel creative. Have the engineer put autotune or reverb in your headphones if it brings out a better performance. Make the studio your sanctuary when recording – not only for comfort, but also as a means for stepping out of your comfort zone when performing.

Know your Ps and Qs when in the booth.

When tracking vocals, familiarity with studio lingo such as stacking, in-outs, punching in, or ad-libs allows for a better flow in the recording process. Your engineer will be happy to explain these techniques; however, becoming familiar with them in advance saves time. Think ahead about the building blocks of vocals you want to make up your record. For example, do you want background vocals in your chorus, or do you prefer not to have them?

A microphone is also a sensitive musical instrument, making mic technique an invaluable thing to consider. Closer proximity to the mic will have a different sound than performing a step back. If you’re performing at a relatively constant volume and are about to get really loud, simply back up off the mic to avoid distorting and having to record another take. Rapping toward the phone you’re reading lyrics off of, rather than aiming your voice at the mic, never sounds good, either. The same goes for turning your head back and forth too much. The pretty, pristine, high-end microphones found in professional studios will highlight poor performance qualities as much as the good ones. Great mic technique is another trait among the best artists in the world.

Also, a side note for headphones: cover both ears, because the mic will pick up and record music bleeding from the phones. And when all is done, rest the headphones on the music stand – never on top of the microphone.

When your engineer is mixing, provide input at the right moments.

The moment you finish recording vocals is an excellent time to pop a bottle and take shots with friends – only away from the control room where the engineer is working. The quieter an environment you give your engineer, the better your record will sound. By no means am I saying festivities shouldn’t happen in the studio – they absolutely should; however, keep in mind that mixing and mastering is a task requiring lots of focus and tranquility. If you do have specific, imperative requests for your record, that’s totally cool, welcomed, and expected – but first give the engineer a few minutes after recording to do his or her thing. I promise you, your record will not head off in any sort of wrong direction during this short period of studio time. Patience is necessary for efficiency. Certain aspects of a sound or mix may take a few minutes to become fully developed or understood, so making a premature critique of the sound could be distracting to the engineer, and ultimately unnecessary. Of course, feedback during mixing is very important for creativity in the studio; however, we also must be sensitive about when and when not to speak up. The idea is to be involved without helicopter-parenting the record. Yet, I will say, if you happen to hear a word or line that’s mispronounced, such that you will be unable to sleep at night, please speak up immediately to get back in the booth.

When the mix is finished, the engineer will bounce or export the song, giving you a chance to hear the final product start to finish on the glorious studio speakers. If there are changes, or sections you want to specifically listen to, let your engineer know before he or she bounces your song.

Be prepared to receive, store and share your music.

At the end of a studio session, you should expect to receive a final product of your song in the form of an MP3 and a 16-bit WAV file. Your engineer will be able to return your material by copying these files onto a flash drive or hard drive, burning a CD, or via email. Keep the files you receive safe and organized. I highly recommend backing up all files you receive from the studio on a hard drive dedicated to storing your music. Utilizing cloud storage such as Dropbox or Google Drive is also wise. Do not use your email as a storage locker for your music, because emails disappear and passwords become forgotten. When the time comes to upload the music online, or to shoot a music video, always use the WAV file, which has substantially better audio quality than an MP3. SoundCloud and YouTube will compress the life out of any MP3. In fact, some streaming services strictly require WAV uploads.
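
For the scripting-inclined, the backup habit is easy to automate. A minimal Python sketch – the function name and dated-folder layout are my own suggestion, not a studio standard:

```python
import shutil
from datetime import date
from pathlib import Path

def backup_studio_files(src_folder, backup_drive):
    """Copy a session's deliverables into a dated folder on a
    dedicated backup drive, e.g. MusicBackup/2019-06-01/MySong."""
    dest = Path(backup_drive) / date.today().isoformat() / Path(src_folder).name
    shutil.copytree(src_folder, dest)  # creates the dated folder as needed
    return dest
```

Run it once after every session and your WAVs live in at least two places.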

Reflect on, and learn from your studio experience.

One of the best parts of engineering is seeing artists improve every time they use the studio. Each studio session, I witness my clients create with increased style and grace. Like any musical instrument, practice makes perfect.

Hopefully, this article was helpful to any creator interested in taking their art to the next level by booking professional studio time. I, too, was once a new artist who had never been to a music studio – slightly nervous, with no idea what to expect. If you’re reading and have more questions, please reach out.

Best,

Engineer Chris Baylaender

Lord Jaws and Bad Luck Kid hit Studio 11 Chicago

Chicago artists & boarders Bad Luck Kid (@_badluckkid_) and Lord Jaws (@lord_jaws) have been hard at work here at Studio 11 throughout 2018 and 2019 with mixer Chris Baylaender. The duo recently came through, and each knocked out a fresh single in a late-night studio session: “Pain and Fear” for BLK and “Nobody” for Jaws. The two artists contrast and complement one another in style and performance, but ultimately the sound is new-wave hip hop, with a touch of authentic guitar. Each has a method (or lack thereof) to his musical madness.

For one, Bad Luck Kid regularly comes to the studio remarkably rehearsed, which isn’t surprising for the talented singer, who also has a few rap bars in his back pocket. Capturing BLK’s voice is a Townsend Sphere L22 condenser mic, positioned accordingly for the upward-projecting vocalist – who sings into the sky, full volume, eyes closed. Passion. The kid is actually lucky if you ask me. Additionally, the “Pain and Fear” single features an original acoustic guitar performance by BLK, also tracked at Studio 11, topped off with crisp hip-hop drums with the help & finesse of producer-rapper Immanuel OD (@odthatnigga) – a deep-voiced, comedic Chicago homie also deserving of respect – recently covered by Elevator Mag.

Lord Jaws is next up to bat, except with this artist, it’s all about the moment and improvisation behind the mic. Jaws has been pretty influential in the realm of expanding the autotune sound we’ve grown to love. By the same token, we’ve also grown accustomed to tuned vocals, so it’s great that Jaws is bringing a new texture to the table. There’s a rock star vibe across the board on all his songs, giving engineer Chris an unlimited approach toward creative mixing, whether the autotune sound is spacey and stuck in a black hole, or in your face – on stage, like a rock star, with fuzzy distortion. There’s no hiding any emotion with this up-and-comer. The music is unfiltered and heavy on the heart, which is really the only way to go about a proper freestyle.

Go big or go home is the motto when skaters (that’s how everyone met, many years ago) BLK, Jaws, and engineer Chris cook up in the studio. There’s hard evidence that Chicago skateboarders are coming just as strong in the studio as they are at Grant Park Skate Plaza downtown. Surprisingly, the plaza has become an iconic spot for music videos, where heavy hitters like Young Chop have stopped by to chill. Bangers only, at least if you’re in the camp of BLK, Jaws, and Chris B.

Stay in tune for an official release of both singles in the coming days. For now, I’d encourage the Chicago community to feast on an already-released collab by Lord Jaws, Bad Luck Kid, and OD entitled “Right Now,” also recorded and mixed by Chris in the Studio 11 A room. Without question, the music is only accelerating. It might as well be 2020 already.

Finishing the Job in Pro Tools: Master Channel Plug Ins and How to Use Them

Mixing in the Box Start to Finish: The Master Channel

The Master Channel is the last channel of the signal path – where all audio is finally summed together and output. This channel is used for monitoring and adjusting the whole mix in its entirety. Final equalization, compression, harmonic enhancement, and even de-essing plug-ins can be integrated into the Master Channel. After these plug-ins are put in place, the final tool to utilize is a limiter, which can be inserted as the last plug-in on the Master Channel chain, or rendered into an audio region using Pro Tools’ AudioSuite. Importantly, sonic errors within a mix should be fixed before touching the Master Channel whatsoever. Correctly using the Master Channel requires an understanding of its utility as the last channel in the signal path, along with programming plug-ins selectively, precisely, and free from error. Arguably, setting the plug-in chain of the Master Channel is the most scientific, and least artistic, procedure of a mixdown. Not only is the Master Channel the last channel of a signal path, it also is the last channel before the mix is introduced to the world – and expected to sound good – whether in a car or auditorium, or out of an iPhone with a cracked screen.

Setting Up a Master Fader or Master Auxiliary Channel in Pro Tools

Either a Master Fader or an Aux must be manually created when using Pro Tools. DAWs like Logic Pro X are friendlier to the average music maker, with a Master Channel and other busses preset and ready to go when opening a blank project. The basic procedure when using Pro Tools is to create a Master Fader.

Option 1 – Use a Master Fader

Bouncing to Disk

In the Toolbar, select “Track,” then “New” (an even faster way is the key command Shift + Command + N). Next, in the box that pops up, change mono to stereo, and select Master Fader in place of “Audio Track.” A maroon Master Fader will now appear in the Mix and Edit windows of Pro Tools. Note that manually routing signal is not needed after activating a Master Fader: it automatically receives the output signals of each channel in the mix, and automatically outputs to “Built-in Output 1-2,” the stereo out to the speakers. Furthermore, when a Master Fader is in use, a mix may be bounced out of Pro Tools by horizontally highlighting the entire project in the Edit window and Bouncing to Disk (found under the File menu).

Option 2 (Best) – Create a Master Auxiliary Track to Print a Mix on to an Audio Track

My professional recommendation is to create a Master Auxiliary track rather than a Master Fader. Using a Master Aux is better suited to printing mixes – actually recording the entire mix onto an audio track within the session. The newly recorded audio region, or print, can be examined as a waveform, rendered using the AudioSuite, and exported out of the DAW. Creating a Master Aux and Print Track requires a manual setup of the signal path routing.

All channels are output to Bus 3-4, which inputs to “MASTER AUX.” A send is placed on the MASTER AUX (Bus 5-6), which arrives on the “PRINT” audio track.

Begin by creating two new tracks: 1) a stereo aux for the Master Aux, where the plug-ins will be inserted, and 2) a stereo audio track, where the mix will be recorded, or printed. The Master Aux’s input should receive all the output signals from the preceding channels using a designated bus. The Master Aux’s output will be stereo 1-2 (the speakers for monitoring). Finally, on the Master Aux, create a send to the stereo audio track, where the mix will be printed as a region (see Bus 5-6). The signal gets to the stereo audio track using this send. This send’s fader must be set to 0 dB, and the stereo audio track must be muted. To print the mix, arm (record enable) the stereo audio track and record the mix in real time. After recording, or printing, export the newly recorded region by selecting the region and Exporting Clip(s) as Files (found under the Clip menu, or simply use the key command Shift + Command + K). Be sure to name your region!

If you’re in school as an audio student, printing a mix within your session, followed by exporting the printed region (rather than bouncing to disk), will help you stand out to your professor. If you’ve already dropped out of school to sacrifice your life to making music, still do the right thing by printing your mixes.

Monitoring a Mix Using the Master Channel

First and foremost, whether it’s a fader on a plug-in, channel, aux, or Master Fader, signal should never clip past 0 dB, entering into the red. The rule of thumb is to keep the fader of the Master Aux or Master Fader set to 0 dB, at unity gain. In turn, if a Master Channel is ever clipping, gain must be reduced in the signal path preceding the Master Channel, and never on the Master Channel itself.

An important concept to understand when mixing in the box is that, unlike with analog equipment, signal above 0 dB does not exist in the language of the computer. Simply put, the binary computer of the DAW cannot even process a digital signal exceeding 0 dB. Mathematical inconsistencies ultimately occur, which is why digital distortion sounds so terrible compared to analog hardware, which in some instances creates colorful timbres. Moreover, while the digital medium can be unforgiving toward the slightest amount of clipping, anything below 0 dB will maintain sonic integrity. DAWs like Pro Tools are not biased when crunching the numbers of a signal at -20 dB or a signal at -0.5 dB; the point is that the signal is not clipping at 0. Still, many digital plug-ins, including ones used on a Master Channel, may respond best when input with a healthy amount of gain. Ultimately, gain staging in the box should be executed as one would with analog gear, just without ever exceeding 0 dB. Before any printing or final limiting, also pay attention to the overall level. A Hip Hop mix peaking in the yellow should be just fine, but consider maintaining adequate headroom for genres such as jazz or classical music, where dynamics are especially important.
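
To make the 0 dBFS ceiling concrete, here’s a toy Python illustration (not DAW code): floating-point samples are treated as values between -1.0 and 1.0, where 1.0 is full scale, and anything beyond that is simply squared off.

```python
import math

def to_dbfs(sample):
    """dB relative to full scale, where a value of 1.0 is 0 dBFS."""
    return 20 * math.log10(abs(sample))

def hard_clip(sample):
    """What happens digitally above full scale: the value is
    truncated, flattening the waveform into harsh distortion."""
    return max(-1.0, min(1.0, sample))

print(round(to_dbfs(0.944), 1))  # -0.5 -> a signal at -0.5 dB is untouched
print(hard_clip(1.3))            # 1.0 -> "above 0 dB" cannot be represented
```

This is why the DAW is indifferent to -20 dB versus -0.5 dB, yet merciless the instant a sample tries to cross full scale.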

Applying Plug-Ins on the Master Channel

Plug-ins such as EQ and multiband compression can help glue a mix together within the mastering chain. Tasteful boosts and enhancement can also come from proper equalization, along with harmonic plug-ins to round out the edges of a mix. Analyzers and limiters are used to attain appropriate loudness when mixing in the box.

EQ

A gentle boost restoring 440 Hz.

Final equalization on the Master Channel may be used to shape and color the mix’s frequency spectrum. Any and all Master EQ should be dialed in wisely – remember, by the time the Master Channel is touched in the mixing process, the mix should already be shaped and colored as effectively as possible. In turn, most of my mixes do not include a Master EQ, since one is not always necessary. When I do apply a Master EQ, I usually apply it as the first plug in on the Master Chain, and approach the EQ as I would approach an EQ in my car: maybe the mix needs just a slight boost in the bass, or slight high shelf for treble, or even a mid range boost or scoop. Boost what sounds good, but never excessively. Moreover, avoid surgical EQ work like narrow notches on the Master Channel. If reductive shaping, or even a simple roll off in the low end, must be applied, experiment with a Linear Phase EQ or an EQ with smooth curves that will not introduce sonic inconsistencies, or a “phasey” sound. With a Master EQ, the Q factor and bandwidth can drastically alter a mix’s timbre with even slight adjustments, so make sure your ears are considering the entire frequency spectrum with every move.

Multiband Compression

The Waves C4 Multiband Compressor

Over the years, multiband compression has become my best friend for gluing a mix together. For Hip Hop and R&B, a multiband compressor is a plug-in to regularly try out. Two of my favorites come from Waves: the purple “C4,” which compresses fairly aggressively, and the Waves Linear Phase Multiband Compressor, which is extremely transparent. Logic Pro X also includes a stock “Multipressor,” which contains up to 4 bands. I tend to avoid multiband compressors with more than 4 bands, as they can skew harmonics within a mix. 4 bands is usually sufficient for achieving a glued, cohesive sound.

Regularly, I will apply the Waves C4 after any Master EQ. I begin with the default C4 program, with each band’s threshold set back to 0 dB. Most of the time, I do not compress the low-end band at all, although I may apply a touch of makeup gain for extra fatness. For the other 3 bands, I adjust the threshold to gently kiss their respective peaks. The attack and release of the Waves C4 default usually don’t need to be changed. Since I am using multiband compression in my Master Chain as a means to gently glue the mix together, I rarely reintroduce more than 3 dB of makeup gain to any single band. Remember, the tool is primarily for compressing, not EQ. Overall, handling dynamics with a multiband compressor also polishes the mix, defines the mid range, and livens the pulse. As with Master EQ, beware of the temptation to blindly apply the preset settings found on multiband compressors. Starting off with the default setting, with each band’s threshold set at 0 dB, is the best strategy for actually listening to how the sound is changed by the tool.

Harmonic Enhancement and Analogue Emulators

The Waves Kramer Master Tape

Pleasant mid-range boosts, high-end smoothing, and an overall rounding of edges on harsh frequencies can be achieved using newer plug-ins emulating yesterday’s colorful, gritty tools, such as tube-based hardware or tape machines. A few gems do in fact exist in the contemporary plug-in repertoire, including the Kramer Master Tape, the J37 Tape, and the Abbey Road Vinyl from Waves. The nostalgia of yesterday is real, and the sound is actually better with the new technology. These plug-ins are a huge factor in giving my mixes a warm, colorful essence – even when using the defaults. Unlike EQ and compression, these particular Waves plug-ins have stellar presets that are safe to test out. Do note, however, that almost all of today’s plug-ins emulating old gear have a “noise” parameter, which I’d recommend silencing. If Waves plug-ins are not in your collection, a touch of distortion on Logic Pro X’s stock compressor or on Lo-Fi from Pro Tools also does the trick. Applying these plug-ins on the Master Channel often yields a night-and-day difference in the sound, and in the happiness toward the mix among all listening in the control room.
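
For the curious, the core idea behind this kind of saturation can be sketched in a few lines. This is a crude tanh soft-clipper – a generic stand-in for tape/tube-style rounding, not any particular product’s algorithm:

```python
import math

def soft_clip(sample, drive=2.0):
    """Gentle tanh saturation: peaks are rounded off instead of
    truncated, adding low-order harmonics ("warmth") in the process.
    Normalized so full scale (1.0) still maps to full scale."""
    return math.tanh(drive * sample) / math.tanh(drive)
```

For example, soft_clip(0.5) comes out around 0.79 – mid-level material is pushed up while the top is rounded, which is roughly where the perceived thickness of tape and tube gear comes from.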

Limiting and Peak Value

As an engineer working with many beatmakers, I am regularly surprised by the amount of confusion around limiters. In reality, the tool is simple and does not involve much tweaking or experimentation. In a DAW’s list of plug-ins, a limiter designed for a Master Channel will be specified by a title such as “UltraMaximizer,” for limiters from Waves, or the “Adaptive Limiter,” found as a stock plug-in in Logic Pro X. Without a limiter, the uncompressed mix will sound too quiet when played outside of the DAW, or at least too quiet for the new generation – who are bumping the Migos and “passing the aux” as a sharp turn is made in the car. Skrrt!

With this in mind, the most common use of a limiter on the Master Channel is bumping a signal as close to unity gain (0 dB) as possible. Therefore, the first parameter to change on the limiter is the “out ceiling,” which should be set to -0.1 dB (this ensures the signal will not actually hit 0 when played back on any type of system). Secondly, ensure the limiter is quantizing at the same bit depth as your DAW session, which should be 24 bits (importantly, any final product for release needs to be exported, bounced, or converted down to 16-bit). The last parameter to set is the threshold, which will determine how much attenuation, or reduction of amplitude, is applied by the limiter. The red, downward-moving meter on the limiter monitors attenuation. Do not attenuate past 6 dB, I assure you.

Limiting in the box involves monitoring the overall peak value and RMS value (Root Mean Square) using an analyzer. I prefer setting the limiter threshold based on the peak value of the mix, more often than the RMS value, especially for Hip-Hop tracks where loud information such as bass is being chopped in and out. Here, the threshold of the limiter should meet the peak of the mix. In other words, if a mix is peaking at -6 dB, the limiter’s threshold should also be set to at least -6 dB. Furthermore, if a mix peaks at -6 dB, a threshold set to -7 dB will result in 1 dB of attenuation; -8 dB will yield 2 dB of attenuation, and so on. Back off on the threshold if the limiter attenuates more than 6 dB, to avoid a mix that sounds too compressed.
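
The arithmetic here is simple enough to write down. A tiny sketch of the peak-versus-threshold relationship (illustrative only – this is not a limiter implementation):

```python
def limiter_attenuation(peak_db, threshold_db):
    """dB of gain reduction a peak-style limiter applies: how far the
    mix's peak sits above the threshold (zero if it never reaches it)."""
    return max(0.0, peak_db - threshold_db)

print(limiter_attenuation(-6.0, -7.0))   # 1.0 dB of attenuation
print(limiter_attenuation(-6.0, -8.0))   # 2.0 dB
print(limiter_attenuation(-6.0, -12.0))  # 6.0 dB -- the suggested maximum
```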

Waves’ PAZ Analyzer and Logic Pro’s Multimeter

I make use of Waves’ L1 UltraMaximizer and L3 for transparent limiting in my studio sessions. Both tools are great for making my mixes appropriately loud without changing color and timbre. The L1 is great when applied last in the Master Channel plug-in chain, and the L3 has a processing algorithm suitable for Pro Tools’ AudioSuite. Since I print my mixes within my sessions, I choose to apply the L3 to the print using AudioSuite, but first I use an analyzer through AudioSuite to determine the peak value of the print. Once I note the peak or RMS, I undo the rendering of the AudioSuite (Command + Z). The threshold setting on my limiter is usually this peak value minus 6 dB, resulting in exactly 6 dB of attenuation. 6 dB of attenuation seems to work well on most popular music containing a full frequency spectrum, and I award you this magic number for reading this far. Your ears will likely agree that attenuating beyond 6 dB is pushing it too far. At the end of the process, I still like to check the overall RMS of the “bumped up” region (which is now much louder). For Hip Hop with bass, an overall RMS of -6.5 dB is in the ballpark of what we are looking for. Certainly, using your ears is important when limiting, so trust them, but also verify.
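
If you’re curious what the analyzer is actually computing, peak and RMS are a few lines of math. A sketch over a list of normalized samples (full scale = 1.0):

```python
import math

def peak_dbfs(samples):
    """Loudest single sample, in dB relative to full scale."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_dbfs(samples):
    """Average level (Root Mean Square), in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A full-scale sine peaks at 0 dBFS but reads about -3 dBFS RMS,
# which is why the two numbers never agree on real program material.
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(round(peak_dbfs(sine), 1), round(rms_dbfs(sine), 1))
```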

Waves’ L1 compressing in real time.

Experimentation and presets are both risky moves when using a limiter, and some of the newer limiters on the market, such as multiband limiters or multimaximizers, are wild, CPU-hungry animals I would also avoid. In a nutshell, peak value limiting is a great method for in-the-box mixing. The peak approach is formulaic, logical, and precise when using an analyzer.

Conclusion

The philosophy of the Master Channel is simple and intuitive, but only when it is approached with accuracy. While the Master Channel can still welcome creative decisions, it is a channel for finishing touches, not for taking risks. If there is a place in audio engineering where technique matters most, it is certainly on the Master Channel. With that said, I can recall countless moments watching veteran Chicago engineers make absolute magic happen using the Master Channel, mysteriously, crazily, and unconventionally. All in all, think outside the box, but pay attention, and trust your ears – but also verify.

Chris Baylaender

Studio 11

 

Tutorial: Avoiding Mistakes Importing into ProTools

Overall, importing audio files and session data into Pro Tools is simple; however, there are many quirks of the Pro Tools DAW which must be understood to prevent files ending up in the wrong place – or even worse, missing for good. Knowing proper operating procedure for importing and moving files around is especially crucial for systems using external hard drives or flash drives.

Important Quick Key Commands for Importing:

Starting a new session: COMMAND + N

Opening a previous session: COMMAND + O

Importing audio into current session: SHIFT + COMMAND + I

Importing session data into current session: SHIFT + OPTION + I

Setting Up the Session:

When creating a new session, what’s most important is ensuring the location – where on the system the session will be saved – is correct. In the window above, my session, “IMPORTING DEMO,” is currently going to be saved and/or located on my external Seagate hard drive in a folder labeled Studio 11. Always check your location to make sure your session is not saved in a strange or unwanted folder. Furthermore, when the new session is created, Pro Tools creates a session folder:

Some things to note with the session folder:
1) The “IMPORTING DEMO.ptx” file requires the entire session folder to operate, so if I ever needed to send somebody my session, I would need to send the entire “IMPORTING DEMO” folder, and not just the purple .ptx file.
2) Never, ever rename any item within the session folder. For example, your session will not function whatsoever if the Audio Files folder becomes “Audio Filezzz.” Pro Tools will not recognize the modified name and will not be able to read data from the renamed folder!
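Since a renamed folder silently breaks a session, a quick sanity check before opening one can save a headache. The sketch below is a hypothetical illustration in Python – it is not anything Pro Tools provides – and the folder name simply mirrors the session layout described above.

```python
import tempfile
from pathlib import Path

# Folder names a session folder is expected to contain (illustrative).
EXPECTED_FOLDERS = {"Audio Files"}

def session_folder_ok(session_dir):
    """Return True if every expected folder is present with its exact name."""
    names = {p.name for p in Path(session_dir).iterdir() if p.is_dir()}
    return EXPECTED_FOLDERS.issubset(names)

# Simulate a session folder where someone renamed the Audio Files folder
demo = Path(tempfile.mkdtemp())
(demo / "Audio Filezzz").mkdir()

broken = session_folder_ok(demo)   # False: "Audio Files" is missing
```

A check like this is only a stand-in for the real habit: glance at the session folder yourself and leave every name untouched.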

Importing Audio:

Undoubtedly, every engineer’s worst nightmare is opening a session seeing grayed out regions and this “box from hell:”

The missing files box appears when Pro Tools is unable to locate and read one or more files within the Audio Files folder. If a file is missing, the file most likely was imported incorrectly beforehand.

When importing, the initial location of the file being imported matters. A file originating from the computer’s Downloads folder will provide an import window like the one below, where the blue “convert” button is used to move Clips in Current File into Clips to Import on the right. Nothing too complicated, right?

However, importing audio must be done very carefully if the file to import is coming from the desktop, an external hard drive, or a flash drive plugged into the computer. In those instances, a box like this will appear, where Pro Tools gives two options: Add or Copy:

This is the most common place where the grave mistake of Adding instead of Copying occurs. Copying must be selected to ensure the file is read from the Pro Tools session’s Audio Files folder. This step is easy to miss, since Pro Tools automatically defaults to adding the file(s)! If a file is added rather than copied, the computer will read data for the imported file at the file’s original source, such as the removable flash drive, and not from the session’s Audio Files folder. In other words, if I plug in a flash drive and “add” files while importing, all those files will be missing if I ever open the session again without the same flash drive plugged in. Files must always be imported and copied so the computer never reads file data anywhere other than the Audio Files folder. The same concept applies to dragging a file from the desktop into a Pro Tools edit window. Since the file was dragged in, and not properly imported and copied, if the Pro Tools session were ever opened on a different computer (with a different desktop), the file would pop up as missing!
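To make the Add-versus-Copy difference concrete, here is a small Python sketch of the two behaviors. It is purely illustrative – the file names and folders are made up, and this is not how Pro Tools is implemented – but it shows why an added file vanishes with its drive while a copied file survives.

```python
import shutil
import tempfile
from pathlib import Path

def import_audio(src, audio_files_dir, copy=True):
    """Sketch of the two import behaviors: Copy duplicates the file into
    the session's Audio Files folder; Add just references the original."""
    audio_files_dir.mkdir(parents=True, exist_ok=True)
    if copy:
        dest = audio_files_dir / src.name
        shutil.copy2(src, dest)   # the session now owns its own copy
        return dest
    return src                    # the session still points at the drive

# Simulate a flash drive and a session's Audio Files folder
flash = Path(tempfile.mkdtemp()) / "verse.wav"
flash.write_bytes(b"fake audio data")
audio_files = Path(tempfile.mkdtemp()) / "Audio Files"

added = import_audio(flash, audio_files, copy=False)
copied = import_audio(flash, audio_files, copy=True)

flash.unlink()   # "unplug" the flash drive
# added now points at a missing file; copied still opens fine
```

The moment the drive disappears, the "added" reference is dead while the copied file lives on inside the session folder – exactly the missing-files box scenario.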

Importing Session Data:

Importing session data allows us to utilize any data from a previous session, such as channel settings or routing, in the current session. I often import session data to load various templates I keep saved on my desktop. Importing session data is also an area where there is no room for mistakes.

Select File > Import > Session Data. Once you have selected the purple .ptx session from which session data will be imported, select the specific tracks you wish to import (highlighted above in blue). I often do not want to import any clips or audio files from a previous session while importing session data, which I can deselect in the Track Data to Import menu:

Now that the imported session data appears in the Pro Tools edit window, one crucial step remains: disk allocation. Similar to copying in audio files while importing audio, disk allocation is essential for permanently integrating the imported session data into the current session. Disk allocation is found in the Setup menu:

Select Disk Allocation. In the new window, hold the Shift key to select all the tracks of the current session. While the tracks are still highlighted, click Select Folder.

The folder you must select is the Pro Tools session folder for your current session. Select Open, and finally, OK in the lower right corner of the Disk Allocation window. Now the imported session data is allocated to your current session. This is always a good time to save!

All in all, saving sessions in the appropriate location, importing audio, and importing session data are procedures with costly mistakes. Double-checking all these procedures is a smart habit to practice, especially when working on an unfamiliar system. In reality, today’s music production is more mobile than ever. Any given Pro Tools session may include files coming from the Internet, email, or multiple flash drives being plugged in and out of the computer. Ultimately, there are countless instances where a file or data may be introduced into a Pro Tools session incorrectly. Opening sessions with missing files or unallocated session data puts projects at a standstill, and undergoing a scavenger hunt for files or data wastes precious time. Avoid the rookie mistakes of adding instead of copying, lazily dragging files into a session, or forgetting the process of disk allocation.

Chris Baylaender

Studio 11

 

 

Digital Over-Processing on Vocals

Essential Protocol to Avoid Over-Processing:

Less is more, and that couldn’t be more true when using digital plug ins. Today’s plug in repertoire is practically endless, with countless options to choose from in EQ, dynamics, effects, emulation, and so on. Despite having limitless options, the reality is that a warm mix comes from using the least amount of digital processing possible – and using it correctly. More often than not, excess plug in use takes away the integrity of the audio within a mix. I refer to this common mistake as over-processing. Again, less is more.

Certainly, the most important part of avoiding over-processing is attaining a proper recording at the source. All processes occurring before a signal enters the DAW are crucial, so experimenting with microphones and their position toward the source, preamps, cables, proper gain, and room acoustics cannot be overlooked. Furthermore, if the talent’s performance can be improved, record until an exceptional take is attained. Ultimately, even the best plug ins cannot make up for errors in this part of the recording process.

Additionally, when recording, ensure your DAW’s session is operating at a sample rate of 44.1 kHz and a bit depth of 24 bits. For music and audio, these are the best settings for attaining a recording with integrity, I assure you. In Pro Tools, these parameters are set in the first window when starting a new session. When a project is finally finished, export in 44.1 kHz and 16 bit, today’s standard CD playback format. Every so often I will receive files from a client to mix at a higher sample rate or bit depth than 44.1 kHz/24 bit. A myth floating around is that recording at a higher sample rate is always better since more information will be sampled. While more information is indeed captured, the audio will actually lose integrity in the mathematical conversion back down to 44.1 kHz/16 bit.
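The numbers behind these settings are easy to verify. Each bit of depth adds about 6.02 dB of dynamic range, and a sample rate can only represent frequencies up to half its value (the Nyquist limit). A quick sketch, assuming the standard formulas:

```python
import math

def dynamic_range_db(bits):
    # each bit doubles the number of amplitude steps: 20*log10(2) ~ 6.02 dB per bit
    return 20 * math.log10(2) * bits

def nyquist_hz(sample_rate):
    # the highest frequency a given sample rate can capture
    return sample_rate / 2

print(round(dynamic_range_db(16), 1))   # ~96.3 dB: CD playback
print(round(dynamic_range_db(24), 1))   # ~144.5 dB: recording headroom
print(nyquist_hz(44100))                # 22050.0 Hz, just past the range of hearing
```

In other words, 24 bit gives generous headroom while tracking, and 44.1 kHz already covers everything human ears can hear.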

As I will cover in plug in usage, no digital procedure in recording, mixing, or mastering can improve the resolution of the source. Everything in the computer operates in binary code. Essentially, what is recorded literally becomes converted into numbers within the DAW. These numbers are fed into any given plug in, and different numbers come out. A good engineer must always consider the delicacy of a digital signal, in that the integrity of digital audio can be lost in translation from plugin to plugin. A rule of thumb is to make the computer crunch as few numbers as possible.
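A toy example of that number crunching: quantizing a sample value to 16-bit steps after each of two imaginary “plug-in” passes. The gain values are invented, but the rounding it demonstrates is real – the output need not return exactly to the input.

```python
def quantize(x, bits=16):
    # round a sample to the nearest step representable at this bit depth
    steps = 2 ** (bits - 1)
    return round(x * steps) / steps

x = 0.3                  # original sample value
y = quantize(x * 1.1)    # "plug-in" 1: gain up, store the result
z = quantize(y / 1.1)    # "plug-in" 2: gain back down, store again
# z is close to x, but the rounding at each stage leaves a small error
```

Each pass throws away a sliver of precision, which is exactly why a long chain of unnecessary plug ins slowly erodes the signal.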

Making Efficient Processing Decisions:

Gain

Assuming I am approaching a mix with correctly recorded audio on each channel, I first ensure all audio is properly gained. Overall, the whole mix should have decent headroom. Remember, in a Pro Tools session, the gain of the clips in the edit window is applied before the signal passes through the channel. Clip gain is significant since most digital plugins work optimally when the input signal has healthy gain. For example, an industry staple I use is the Renaissance Compressor from Waves, a solid dynamics tool. However, the algorithm does not function as well at a low threshold setting. With respect to the Renaissance Compressor, adjusting clip gain will work better than having to duck the threshold. Importantly, like analog gear, digital plug ins also have sweet spots in terms of gain staging.
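Since gain staging comes up at every step, the dB-to-linear conversion is worth keeping in your head. Here is a short sketch of the standard amplitude formulas (what counts as “healthy gain” depends on the specific plugin, so no target numbers are given):

```python
import math

def db_to_linear(db):
    # a dB change expressed as a linear amplitude multiplier
    return 10 ** (db / 20)

def linear_to_db(ratio):
    # a linear amplitude ratio expressed in dB
    return 20 * math.log10(ratio)

print(round(db_to_linear(6), 3))    # ~1.995: +6 dB roughly doubles amplitude
print(round(linear_to_db(0.5), 2))  # -6.02: halving amplitude loses ~6 dB
```

So nudging a quiet clip up 6 dB of clip gain roughly doubles the signal feeding the plugin chain, often enough to land a compressor in its sweet spot.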

Applying Plug Ins Carefully

Plug ins and vocals can be tricky – and very susceptible to over-processing. Not only are vocals very dynamic and wide in frequency range, they can also contain offensive resonances due to the microphone or acoustic space used in the recording process. When dealing with vocals in the DAW, critical thinking and listening must always be practiced. Just as a careless painter can “paint himself into a corner,” the same goes for mixing vocals. This occurs from not being attentive to what a vocal needs in a mix and what each plug in facilitates. Vocal plug ins must be implemented with a plan to avoid over-processing. Moreover, sometimes a plugin helps one need of the vocal but undermines other elements while we pay attention to that one specific improvement. In particular, compression or reverb can reintroduce mid range frequencies previously scooped out. Overall, applying plugins on vocals can take one step forward and two steps back when a single plugin function distracts us from the sound as a whole.

Most of my Pro Tools sessions contain vocal channels with reductive EQ, compression, and a de-esser as my first plugins, in that order. I consistently try to use them as efficiently as possible, often in corrective ways to fix unwanted sonic characteristics. One thing I’ve learned is that if any surgical approach on vocals is executed without utmost accuracy, especially in the initial plugins, over-processing is bound to occur. With each plugin you apply, you really have to nail it on the head. Inaccurate surgical EQ is never beneficial.

Reductive, Surgical Equalization in Depth

With respect to a reductive EQ, which is often my first plug in on a vocal, I usually am notching out a specific, offensive frequency in the upper mids (between 2100 Hz and 5000 Hz). I would go as far as to claim whistle tones in this range of vocal frequencies are the factors most responsible for harsh, cold-sounding music in today’s industry. These resonances can be found plaguing everyone from Kelly Clarkson to Drake, and in many cases are the reason music becomes uncomfortable to listen to – after enough time at a live venue, or wearing headphones. Please do not confuse musical brightness or crispness with vocals that are, in fact, strident and piercing! As a result, if I hear an offensive whistle tone that consistently pokes through a vocal recording, I prefer to surgically cut it out, or notch the frequency, first. Remember, notching can hurt the integrity of a vocal recording if not executed accurately, creating a “phasey” sound. In fact, for avoiding over-processing later in the signal flow, there is no margin for error in initial EQ notches – doing it wrong will come back to bite you later in the mix. The target frequency must be clearly gone after reducing the surgical EQ’s gain. Importantly, when notching, experiment with the narrowness (and wideness) of the EQ band. Again, the subtracted frequency must be nailed to a T – an offensive resonance may seem properly removed at 4000 Hz, but be even more effectively taken care of upon setting the EQ to 4100 Hz – a very slight, but imperative adjustment. Use your ears here! Ultimately, a proper surgical EQ cut will remove an unwanted frequency for good, and not uncover additional offensive frequencies in the signal. Excess surgical EQ is practically synonymous with over-processing; surgical notches, if needed, usually should not occur in more than three to four instances in a mix.
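For a feel of how much spectrum a notch touches, the usual rule of thumb relates bandwidth to center frequency and Q (bandwidth ≈ f0/Q). The sketch below uses that approximation with symmetric edges – a simplification of the true logarithmic spacing – and the 4000 Hz versus 4100 Hz numbers mirror the example above.

```python
def notch_edges(center_hz, q):
    # approximate -3 dB edges of a notch: bandwidth = f0 / Q, placed symmetrically
    bandwidth = center_hz / q
    return center_hz - bandwidth / 2, center_hz + bandwidth / 2

# a narrow notch aimed at 4000 Hz misses a resonance living at 4100 Hz
low, high = notch_edges(4000, 40)     # (3950.0, 4050.0): 4100 Hz untouched
low2, high2 = notch_edges(4100, 40)   # (4048.75, 4151.25): resonance covered
```

The narrower the Q, the less margin for error in picking the center frequency – which is exactly why a 100 Hz miss can leave the whistle tone fully intact.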

Auxiliary Busses and EQ

Remember, a key pillar of avoiding over-processing is organizing plugins so the computer does not have to work as hard. A helpful strategy to limit number crunching is to send all vocal channels to a single bus for further EQ, compression, de-essing, or effects processing. Often when mixing choruses containing stacked vocal recordings, I will send all channels to one stereo bus, where I tend to cut any mid range build-up, as well as boost the musical frequencies of the vocal. Applying these additional boosts and cuts to each individual channel would simply require too much digital signal processing. The bus is a great tool for keeping processing efficient and CPU-lightweight.
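The CPU savings are simple arithmetic. Counting hypothetical plug-in instances for a stacked chorus (the channel and chain counts here are invented for illustration):

```python
channels = 12        # stacked chorus vocal tracks (illustrative count)
chain_length = 3     # EQ, compressor, de-esser

per_channel_instances = channels * chain_length   # 36 instances, one chain per track
bus_instances = chain_length                      # 3 instances on one shared bus
```

Twelve sends feeding one bus replace dozens of plug-in instances, so the same corrective moves cost a fraction of the DSP.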

In particular, the EQs on my aux busses for vocals often apply a high pass filter, one or two scoops to address offensive mid range build-up (usually between 200 and 550 Hz), as well as a high shelf for presence. Before scooping out any mid range, simply reducing the bus volume is worth testing! Applying a high shelf on the bus also must be done carefully, so as to avoid boosting harsh frequencies in the upper mids where the ugly whistle tones thrive. Furthermore, I am careful not to set the shelf gain too high. I also include an additional scoop in the upper mids around 2600 Hz, if necessary, to reduce harshness in the vocal.

In conclusion, less is more, still and always. I encourage applying any surgical EQ on individual channels when mixing vocals. Vocals then may be sent to a bussed EQ to ease the number crunching on your machine. All in all, do not approach EQ nonchalantly or inattentively. Make sure the EQs are neat and orderly. Confirm your EQs and all plug ins are improving the vocal signal without taking one step forward and two back. Using digital tools efficiently is key for a warm mix in the box. If you find yourself applying excess EQ and plug ins, on the verge of over-processing, start over. There is likely a better way.

Chris Baylaender

Studio 11

 

Constructing a Good Mix: The Pyramid Concept

Step One: Seeing Sound

From early on in my musical career, I have visualized mixes as sonic paintings. Arguably, “seeing the sound” is as instantaneous as listening: right away, our imagination translates what is heard into some sort of visual representation. As a critical listener, I notice my brain perceives some instruments very literally. For example, when I analyze percussion within a mix, such as high hats, my visual imagination automatically responds by “painting” an actual high hat, or a snare – or tom. For other sounds such as vocals, what I visualize while listening can be very abstract, and sometimes impossible to describe beyond “energetic shapes of frequencies.” Ultimately, any critical listener’s imagined sonic painting will be different; however, as a mix engineer, getting lost within a sonic painting is not an option. There is a right way to build, deconstruct, and holistically analyze a sonic painting. In the act of mixing, the engineer, more accurately, is sculpting a mix rather than painting one. I believe the shape of this imaginary sculpture of sound is best described by a pyramid. In light of “seeing the sound” technically and professionally, sculpting the “sonic pyramid” is one of the best philosophies I have ever put into practice – for making mix decisions on individual instruments (the pyramid steps leading to the top), and the mix as a whole (the pyramid altogether).

The Pyramid Position and the Studio Monitors

Picture an equilateral triangle of sound in front of the left and right studio monitors (and possibly a subwoofer underneath, if you have one). The left and right studio monitors sit halfway between the top and bottom of the imaginary triangle, and below this triangle is your subwoofer. In turn, the triangle is widest toward its base, where the subwoofer is. Above the left and right monitors, the triangle finally comes to its peak. So now we have a triangle positioned with respect to the speakers – stay with me here!

Frequencies within the Pyramid: Where they Go and How Loud they Should Be

Audible frequencies range from 20 Hertz to 20,000 Hertz. Essentially, the golden rule of the sound pyramid is that low frequencies make up the bottom and are loudest, while high frequencies belong at the top and are lowest in volume. Theoretically, the peak of the pyramid is 20,000 Hertz, and the pyramid base is 20 Hertz. In turn, as the sonic pyramid ascends from bottom to top, frequencies become higher while volume decreases. As a result, 500 Hz should be slightly louder than 1000 Hz in a mix, 1000 Hz should be louder than 4000 Hz, and so on. In another example, a high hat made up of high frequencies should not be louder than the snare drum, made up of mid range frequencies!

Above: The PAZ Analyzer from Waves applied to the master channel of a good mix reflects a downward frequency spectrum: volume gradually decreases as frequency increases.
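The golden rule lends itself to a quick check: walk up the frequency bands and confirm no higher band is louder than the one below it. The band levels below are invented numbers standing in for what an analyzer like PAZ might display.

```python
# per-band levels in dB, lowest frequency band first (invented example data)
bands_hz  = [63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]
levels_db = [-6, -8, -9, -11, -13, -16, -19, -23, -28]

def follows_pyramid(levels):
    """True if each band is no louder than the next band down in frequency."""
    return all(upper <= lower for lower, upper in zip(levels, levels[1:]))

good_mix = follows_pyramid(levels_db)           # True: a downward-sloping spectrum
bad_mix = follows_pyramid([-6, -3, -9, -12])    # False: a loud band pokes out
```

A real mix will wobble band to band, so treat this as the shape to aim for rather than a strict pass/fail test.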

 

Sculpting the Pyramid:

I hear lots of poorly mixed music on the internet where, frankly, the sonic pyramid is nowhere near existent: beats have piercing high hats as loud as the bass drums, or the vocal is extremely loud and stepping all over the mix. In reality, once the pyramid is visualized, it becomes an easy mental strategy to use with tools such as EQ. The great thing about constructing the mix with the pyramid is the way relationships between instruments become conceptualized, since each frequency range occupies an exact position within the pyramid. With this in mind, you begin to EQ and compress soloed instruments, but still make decisions with the mix as a whole in mind. See the sound – and the precise geometry of each frequency’s pocket in the mix: the kick is louder and near the bottom of the sonic pyramid you see; the snare is less intense, near the middle, going up the pyramid. Moving further up the frequency spectrum, the same goes for snares and hi hats: snares should be louder than high hats containing higher frequencies, which sit above them in the pyramid as a result. If two sounds share a similar frequency range, or pocket in the pyramid, as snares and vocals sometimes do, adjust your faders so they are equally intense, but never fighting for frequency content. Overall, for each instrument, consider its most musical frequency and pocket it into your pyramid. Adjust the instruments of the pyramid pockets with an equalizer, and compress instruments interfering with adjacent pockets higher up in the pyramid. For example, if your mix contains a bass guitar and piano, the piano should not contain low frequencies interfering with the bass’ space in the pyramid. The piano belongs in the mids, and its low end content may need to be removed with EQ, or controlled with compression.

All in all, next time you hear a mix from a great engineer, where all instruments are present, rich, and not fighting for space, observe the pyramid scheme at work. Once you understand the pyramid scheme, it should be impossible to see the sound of a mix any other way in front of studio monitors, or any speaker for that matter. As abstract as your sonic vision may be, never will you ever “see” a kick drum on top of a high hat.

Chris Baylaender

Studio 11

Philosophy of the Recording Engineer

After years as a performing musician, producer, studio client, intern, assistant engineer, and engineer, I’d like to share my two cents on the purpose of the engineer, and what constitutes a great one.

I can distinctly recall a shocking moment during my very first internship in a recording studio. I was cleaning up a session from the previous night where a client had spilled all sorts of drinks on a hardwood floor. The surface was so sticky that my shoes were difficult to lift. I began to mop up the mess, and before I knew it, the senior engineer was yelling at me in an uproar: “You’re doing it wrong, mop with the grain of the wood!” At the time, I legitimately thought the guy was psycho for becoming so angry over how I was mopping. I couldn’t have been more wrong. I was mopping against the grain of the wood, and therefore, inefficiently. The lesson I learned is if something can be done better, do it. As an engineer, the right way is always the most efficient way, and that is why I deserved a tongue lashing, even for what seemed to be a minor error. Even the most experienced engineers strive to improve their methods, and this goes beyond the technicalities of recording, mixing, and mastering – or mopping a studio floor. Maintaining a mindset of improvement is a way of life – extending to any circumstances an engineer may experience on a daily basis.

At face value, the studio’s engineer operates all machines of the production process. The engineer is an expert on how they work and on putting them to work, individually and collectively. The engineer ensures the final product, the record, possesses optimal sound quality for applications in the consumer industry. While all of this is true, I find the formal description of the studio engineer incomplete. It is missing the philosophical side of an engineer’s purpose: in particular, the role of the studio aside from completing a project, and the timeless quality a great engineer can instill in a record.

One truth to working in a recording studio is you never know who or what is going to attend a session. Clients come from all walks of life, and consequently, with all varieties of music. Working at Studio 11, I have serviced clients speaking foreign languages, clients dressed in sparkling costumes, celebrities, gangbangers, clients who are engineers like me, clients who have made me pray with them before the session starts, professional musicians, the musically inexperienced, and even a blind client – just to name a few examples. Being a recording engineer is a task demanding social skills, plain and simple. Any given person must find comfort in the studio – with the engineer as the host.

At the end of the day, the engineer must host a meaningful, musical experience for all clients on the schedule. I believe an ideal studio session entails the client leaving with a quality record, and also as a better creator than when they first walked into the studio. The recording studio is a musical instrument. While the engineer is the expert at playing this musical instrument, involving the client must be encouraged. The mood, vibe, and communication of a studio session leave an imprint on any client’s satisfaction with a record. Ultimately, I believe a comfortable, creative atmosphere is necessary for studio sessions, and a good engineer will respect the creative styles of all clients. Input and opinion matter, and it matters how these viewpoints are communicated. In a nutshell, the creative direction of a studio session is decided by the client, not the engineer.

Nevertheless, the engineer directly manipulates the studio, operating its machines, and making it happen. Surely, there are creative elements of a record where influence from the client is usually absent, or trusted in the engineer’s decisions. Overall, the engineer is the most trusted creator of proper sound quality and using the studio efficiently. Moreover, I believe another duty is equally expected: engineers must also do all they can to bless a record with timelessness.

Timelessness in music is a quality possessed by everyone from Beethoven to Pink Floyd to Snoop Dogg. The timeless characteristic may be a combination of originality, authenticity, and uniqueness. What I can confidently state is that the studio session plays a major role in timelessness happening or not happening. My most effective approach toward giving clients a timeless record is treating the recording as an artifact. It’s not just an mp3 or .wav file upon leaving the studio doors, but also the most accurate document of a performance that will ever exist.

Identifying the grassroots of musical ideas is also something to make a mental note of. Such knowledge translates into decisions made by the engineer, consciously or subconsciously. I am hesitant to claim any piece of music is completely original, which is why acknowledging music from the past is crucial for realizing the artistic medium of recording. Knowing what came before is instrumental in building the foundation of a record; from then on, the sky’s the limit regarding creativity. The sonic foundation for the creative music must be there, though.

Lastly, I must touch back on reality, in that, at times, the engineering process steers away from my utopian description of the effective studio session. Speedbumps and challenges inevitably surface. The job may be hard, but it’s a job we are lucky to have. When things do steer off course, the right decision is finding a solution, never resorting to an excuse. It’s what great engineers do.

 

Chris Baylaender
Studio 11