Tutorial: Avoiding Mistakes Importing into Pro Tools

Overall, importing audio files and session data into Pro Tools is simple; however, there are many quirks of the Pro Tools DAW that must be understood to prevent files from ending up in the wrong place – or even worse, missing for good. Knowing the proper operating procedure for importing and moving files around is especially crucial for systems using external hard drives or flash drives.

Important Quick Key Commands for Importing:

Starting a new session: COMMAND + N

Opening a previous session: COMMAND + O

Importing audio into current session: SHIFT + COMMAND + I

Importing session data into current session: SHIFT + OPTION + I

Setting Up the Session:

When creating a new session, what’s most important is ensuring the location, or where on the system the session will be saved, is correct. In the window above, my session, “IMPORTING DEMO,” is currently going to be saved and/or located on my external Seagate hard drive in a folder labeled Studio 11. Always check your location to make sure your session is not saved in a strange or unwanted folder. Furthermore, when the new session is created, Pro Tools creates a session folder:

Some things to note with the session folder:
1) The “IMPORTING DEMO.ptx” file requires the entire session folder to operate, so if I ever needed to send somebody my session, I would need to send the entire “IMPORTING DEMO” folder, and not just the purple .ptx file.
2) Never, ever rename any item within the session folder. For example, your session will not function whatsoever if the Audio Files folder becomes “Audio Filezzz.” Pro Tools will not recognize the modified name and will not be able to read data from the renamed folder!

Importing Audio:

Undoubtedly, every engineer’s worst nightmare is opening a session to see grayed-out regions and this “box from hell:”

The missing files box appears when Pro Tools is unable to locate and read one or more files within the Audio Files folder. If a file is missing, it most likely was imported incorrectly in the first place.

When importing, the initial location of the file being imported matters. A file originating from the computer’s Downloads folder will produce an import window like the one below, where the blue “Convert” button is used to move Clips in Current File into Clips to Import on the right. Nothing too complicated, right?

However, importing audio must be done very carefully if the file to import is coming from the desktop, an external hard drive, or a flash drive plugged into the computer. In those instances, a box like this will appear, where Pro Tools gives two options: Add or Copy:

This is the most common place where the grave mistake of Adding instead of Copying occurs. Copying must be selected to ensure the file is read from the Pro Tools session’s Audio Files folder. This step is easy to miss, since Pro Tools automatically defaults to adding the file(s)! If a file is added rather than copied, the computer will read the imported file’s data from its original source, such as the removable flash drive, and not from the session’s Audio Files folder. In other words, if I plug in a flash drive and “add” files while importing, all those files will be missing if I ever open the session again without that same flash drive plugged in. Files must always be imported and copied so the computer never reads file data from anywhere other than the Audio Files folder. The same concept applies to dragging a file from the desktop into a Pro Tools edit window. Since the file was dragged in, and not properly imported and copied, if the Pro Tools session were ever opened on a different computer (with a different desktop), the file dragged in from the desktop would pop up as missing!

Importing Session Data:

Importing session data allows us to utilize any data from a previous session, such as channel settings or routing, in the current session. I often import session data to load various templates I keep saved on my desktop. Importantly, importing session data is also an area where there is no room for mistakes.

Select File and then Import Session Data. Once you have selected the purple .ptx session from which session data will be imported, select the specific tracks you wish to import (highlighted above in blue). I often do not want to import any clips or audio files from a previous session while importing session data, which I can deselect in the Track Data to Import menu:

Now that the imported session data appears in the Pro Tools edit window, one crucial step remains: disk allocation. Similar to copying in audio files while importing audio, disk allocation is essential for permanently integrating the imported session data into the current session. Disk allocation is found in the Setup menu:

Select Disk Allocation. In the new window, hold the Shift key to select all the tracks of the current session. While the tracks are still highlighted, click on Select Folder.

The folder you must select is the Pro Tools session folder for your current session. Select Open, and finally OK in the lower right corner of the Disk Allocation window. The imported session data is now allocated to your current session. This is always a good time to save!

All in all, saving sessions in the appropriate location, importing audio, and importing session data are procedures where costly mistakes can be made. Double-checking all of these procedures is a smart habit to practice, especially when working on an unfamiliar system. In reality, today’s music production is more mobile than ever. Any given Pro Tools session may include files coming from the Internet, email, or multiple flash drives being plugged in and out of the computer. Ultimately, there are countless instances where a file or data may be introduced into a Pro Tools session incorrectly. Opening sessions with missing files or unallocated session data puts projects at a standstill, and undergoing a scavenger hunt for files or data wastes precious time. Avoid the rookie mistakes of adding instead of copying, lazily dragging files into a session, or forgetting the process of disk allocation.

Chris Baylaender

Studio 11

 

 

Digital Over-Processing on Vocals

Essential Protocol to Avoid Over-Processing:

Less is more, and that couldn’t be more true when using digital plug ins. Today’s plug in repertoire is practically endless, with countless options to choose from in EQ, dynamics, effects, emulation, and so on. Despite having limitless options to choose from, the reality is that a warm mix comes from using the least amount of digital processing possible – and using it correctly. More often than not, excess plug in use takes away from the integrity of the audio within a mix. I refer to this common mistake as over-processing. Again, less is more.

Certainly, the most important part of avoiding over-processing is attaining a proper recording at the source. All processes occurring before a signal enters the DAW are crucial, so experimenting with microphones and their position toward the source, preamps, cables, proper gain, and room acoustics cannot be overlooked. Furthermore, if the talent’s performance can be improved, record until an exceptional take is attained. Ultimately, even the best plug ins cannot make up for errors in this part of the recording process.

Additionally, when recording, ensure your DAW’s session is operating at a sample rate of 44.1 kHz and a bit depth of 24 bits. For music and audio, these are the best settings for attaining a recording with integrity, I assure you. In Pro Tools, these parameters are set in the first window when starting a new session. When a project is finally finished, export at 44.1 kHz and 16 bit, today’s standard CD playback format. Every so often I will receive files from a client to mix at a higher sample rate or bit depth than 44.1 kHz/24 bit. A myth floating around is that recording at a higher sample rate is better since more information will be sampled. While this is true, the audio will actually lose integrity when mathematically converted back down to 44.1 kHz/16 bit.
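To make that conversion step concrete, here is a minimal sketch of the downsample-and-requantize process using Python and SciPy rather than any DAW feature. The filenames and the 96 kHz source rate are illustrative assumptions, not part of any Pro Tools workflow.

```python
# Illustration only (not a Pro Tools feature): convert a higher-rate file
# down to 44.1 kHz / 16-bit. Filenames and the source rate are assumptions.
from fractions import Fraction

import numpy as np
from scipy.io import wavfile
from scipy.signal import resample_poly

rate_in, audio = wavfile.read("vocal_96k.wav")   # hypothetical 96 kHz source
audio = audio.astype(np.float64)
peak = np.max(np.abs(audio))
if peak > 0:
    audio /= peak                                # normalize to the -1..1 range

# Sample-rate conversion down to 44.1 kHz (e.g. 96000 -> 44100 is 147/320).
ratio = Fraction(44100, int(rate_in))
converted = resample_poly(audio, ratio.numerator, ratio.denominator, axis=0)

# Re-quantize to 16-bit PCM, the CD delivery format described above.
pcm16 = np.clip(np.round(converted * 32767), -32768, 32767).astype(np.int16)
wavfile.write("vocal_44k_16bit.wav", 44100, pcm16)
```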

As I will cover in plug in usage, no digital procedure in recording, mixing, or mastering can improve the resolution of the source. Everything in the computer operates in binary code. Essentially, what is recorded is literally converted into numbers within the DAW. These numbers are fed into any given plug in, and different numbers come out. A good engineer must always consider the delicacy of a digital signal, in that the integrity of digital audio can be lost in translation from plugin to plugin. A rule of thumb is to make the computer crunch as few numbers as possible.

Making Efficient Processing Decisions:

Gain

Assuming I am approaching a mix with correctly recorded audio on each channel, I first ensure all audio is properly gained. Overall, the whole mix should have decent headroom. Remember, in a Pro Tools session, the gain of the clips in the edit window is applied before the signal passes through the channel. Clip gain is significant since most digital plugins work optimally when the input signal has healthy gain. For example, an industry staple I use is the Renaissance Compressor from Waves, a solid dynamics tool. However, the algorithm does not function as well at a low threshold setting. With respect to the Renaissance Compressor, adjusting clip gain will work better than having to duck the threshold. Importantly, like analog gear, digital plug ins also have sweet spots in terms of gain staging.
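As a quick worked example of that gain staging (plain arithmetic, not Waves’ actual algorithm), a clip-gain change expressed in decibels simply scales every sample value before any plugin ever sees it; the sample values below are hypothetical.

```python
# Illustration of clip gain as plain number crunching: a gain change in dB
# scales every sample before it reaches a plugin's threshold detector.
import numpy as np

def apply_clip_gain(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Scale audio samples by a gain expressed in decibels."""
    return samples * (10.0 ** (gain_db / 20.0))

# A quiet clip peaking around -24 dBFS...
quiet = np.array([0.063, -0.05, 0.04])
# ...raised by 12 dB of clip gain now peaks around -12 dBFS,
# a healthier level for a compressor threshold than the original clip.
hotter = apply_clip_gain(quiet, 12.0)
print(hotter.max())   # ~0.25
```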

Applying Plug Ins Carefully

Plug ins and vocals can be tricky – and very susceptible to over-processing. Not only are vocals very dynamic and wide in frequency range, they can also contain offensive resonances due to the microphone or acoustic space used in the recording process. When dealing with vocals in the DAW, critical thinking and listening must always be practiced. Mixing vocals can feel like “painting oneself into a corner”: this happens from not being attentive toward what a vocal needs in the mix, and what each plug in facilitates. Vocal plug ins must be implemented with a plan to avoid over-processing. Moreover, sometimes a plugin helps one need of the vocal but undermines other elements while we focus on that one specific improvement. In particular, compression or reverb can reintroduce mid-range frequencies previously scooped out. Overall, applying plugins on vocals can take one step forward and two steps back when a single plugin function distracts us from the sound as a whole.

Most of my Pro Tools sessions contain vocal channels with reductive EQ, compression, and a de-esser as my first plugins, in that order. I consistently try to use them as efficiently as possible, often in corrective ways to fix unwanted sonic characteristics. One thing I’ve learned is that if any surgical approach on vocals is executed without utmost accuracy, especially in the initial plugins, over-processing is bound to occur. With each plugin you apply, you really have to nail it on the head. Inaccurate surgical EQ is never beneficial.

Reductive, Surgical Equalization in Depth

With respect to a reductive EQ, which is often my first plug in on a vocal, I am usually notching out a specific, offensive frequency in the upper mids (between 2100 Hz and 5000 Hz). I would go as far as to claim that whistle tones in this range of vocal frequencies are the factor most responsible for harsh, cold-sounding music in today’s industry. These resonances can be found plaguing everyone from Kelly Clarkson to Drake, and in many cases are the reason music becomes uncomfortable to listen to – after enough time at a live venue, or wearing headphones. Please do not confuse musical brightness or crispness with vocals that are, in fact, strident and piercing! As a result, if I hear an offensive whistle-tone resonance that consistently pokes through a vocal recording, I prefer to surgically cut it out, or notch the frequency, first. Remember, notching can hurt the integrity of a vocal recording if not executed accurately, creating a “phasey” sound. In fact, to avoid over-processing later in the signal flow, there is no margin for error in initial EQ notches – doing it wrong will come back to bite you later in the mix. The target frequency must be clearly and completely gone after reducing the surgical EQ’s gain. Importantly, when notching, experiment with the narrowness (and wideness) of the EQ band. Again, the subtracted frequency must be nailed to a T – an offensive resonance may seem properly removed at 4000 Hz, but may be even more effectively taken care of upon setting the EQ to 4100 Hz – a very slight, but imperative, adjustment. Use your ears here! Ultimately, a proper surgical EQ cut will remove an unwanted frequency for good, and not introduce additional offensive frequencies into the signal. Excess surgical EQ is practically synonymous with over-processing; surgical notches, if needed, usually should not occur in more than three or four instances in a mix.
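For readers who want to see the idea outside the DAW, here is a minimal sketch of a narrow notch using SciPy; the 4100 Hz center frequency and the Q value are just the illustrative numbers from the paragraph above, and the test signal is a made-up stand-in for a vocal.

```python
# Sketch of a surgical notch (not a Pro Tools plugin): remove a narrow
# resonance around 4100 Hz while leaving neighboring content alone.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 44100.0          # session sample rate
f0 = 4100.0           # offending whistle-tone frequency found by ear
q = 30.0              # higher Q = narrower notch; experiment as described above

b, a = iirnotch(f0, q, fs=fs)

# Apply to a test signal: a 4100 Hz whistle riding on a 220 Hz tone.
t = np.arange(0, 1.0, 1 / fs)
vocal_like = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 4100 * t)
notched = filtfilt(b, a, vocal_like)   # zero-phase filtering for the demo
```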

Auxiliary Busses and EQ

Remember, a key pillar of avoiding over-processing is organizing plugins so the computer does not have to work as hard. A helpful strategy to limit number crunching is to send all vocal channels to a single bus for further EQ, compression, de-essing, or effects processing. Often when mixing choruses containing stacked vocal recordings, I will send all channels to one stereo bus, where I tend to cut any mid-range buildup as well as boost the musical frequencies of the vocal. Applying these additional boosts and cuts to each individual channel would simply require too much digital signal processing. The bus is a great tool for keeping processing efficient and the CPU load light.

Particularly, the EQ(s) on my aux busses for vocals often apply a high-pass filter, one or two scoops to address offensive mid-range buildup (usually between 200 and 550 Hz), as well as a high shelf for presence. Before scooping out any mid-range, simply reducing the bus volume is worth testing! Applying a high shelf on the bus must also be done carefully, so as to avoid boosting harsh frequencies in the upper mids where the ugly whistle tones thrive. Furthermore, I am careful not to set the shelf gain too high. I also include an additional scoop in the upper mids, around 2600 Hz, if necessary, to reduce harshness in the vocal.
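Here is a rough sketch of the CPU-saving idea in code form (again SciPy, not Pro Tools): three stand-in vocal channels are summed to one bus, and a single high-pass filter processes the sum instead of three separate instances. The 90 Hz cutoff and the random “channels” are illustrative assumptions.

```python
# Sketch of the bus idea: sum several vocal channels, then filter the sum
# once instead of running the same EQ on every channel.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
# Hypothetical one-second stand-ins for lead, double, and harmony channels.
lead, double, harmony = (np.random.randn(fs) * 0.1 for _ in range(3))

vocal_bus = lead + double + harmony            # the summed vocal bus

# One high-pass on the bus replaces three per-channel high-passes.
sos = butter(2, 90, btype="highpass", fs=fs, output="sos")
bus_filtered = sosfilt(sos, vocal_bus)
```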

In conclusion, less is more, still, and always. I encourage using any surgical EQ on individual channels when mixing vocals. Vocals may then be sent to a bussed EQ to ease the number crunching on your machine. All in all, do not approach EQ nonchalantly or inattentively. Make sure the EQs are neat and orderly. Confirm your EQ(s) and all plug ins are improving the vocal signal without taking one step forward and two back. Using digital tools efficiently is key for a warm mix in the box. If you find yourself applying excess EQ and plug ins, on the verge of over-processing, start over. There is likely a better way.

Chris Baylaender

Studio 11

 

Constructing a Good Mix: The Pyramid Concept

Step One: Seeing Sound

From early on in my musical career, I have visualized mixes as sonic paintings. Arguably, “seeing the sound” is as instantaneous as listening: right away, our imagination translates what is heard into some sort of visual representation. As a critical listener, I notice my brain perceives some instruments very literally. For example, when I analyze percussion within a mix, such as high hats, my visual imagination automatically responds by “painting” an actual high hat, or a snare – or tom. For other sounds such as vocals, what I visualize while listening can be very abstract, and sometimes impossible to describe beyond “energetic shapes of frequencies.” Ultimately, any critical listener’s imagined sonic painting will be different; however, as a mix engineer, getting lost within a sonic painting is not an option. There is a right way to build, deconstruct, and holistically analyze a sonic painting. In the act of mixing, the engineer, more accurately, is sculpting a mix rather than painting one. I believe the shape of this imaginary sculpture of sound is best described by a pyramid. In light of “seeing the sound” technically and professionally, sculpting the “sonic pyramid” is one of the best philosophies I have ever put into practice – for making mix decisions on individual instruments (the pyramid steps leading to the top), and the mix as a whole (the pyramid altogether).

The Pyramid Position and the Studio Monitors

Picture an equilateral triangle of sound in front of both the left and right studio monitors (and possibly a subwoofer underneath, if you have one). The left and right studio monitors sit halfway between the top and bottom of the imaginary triangle, and below them is your subwoofer. In turn, the triangle is widest toward its base, where the subwoofer is. Above the left and right monitors, the triangle finally comes to its peak. So now we have a triangle positioned with respect to the speakers – stay with me here!

Frequencies within the Pyramid: Where they Go and How Loud they Should Be

Audible frequencies range from 20 Hz to 20,000 Hz. Essentially, the golden rule of the sound pyramid is that low frequencies make up the bottom and are loudest, while high frequencies belong at the top and are lowest in volume. Theoretically, the peak of the pyramid is 20,000 Hz, and the base is 20 Hz. In turn, as the sonic pyramid ascends from bottom to top, frequencies become higher while volume must decrease. As a result, 500 Hz should be slightly louder than 1000 Hz in a mix, 1000 Hz should be louder than 4000 Hz, and so on. In another example, a high hat made up of high frequencies should not be louder than the snare drum, which is made up of mid-range frequencies!

Above: The PAZ Analyzer from Waves applied to the master channel of a good mix reflects a downward frequency spectrum: volume gradually decreases as frequency increases.
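A rough, non-Pro Tools way to check for the same downward tilt the analyzer shows is to measure the RMS level of each octave band of a bounced mix and confirm it generally falls as frequency rises. The filename and band edges below are illustrative assumptions.

```python
# Rough check of the "pyramid" tilt: per-octave RMS of a mixdown should
# generally fall as frequency rises. Filename is hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, mix = wavfile.read("mixdown.wav")
mix = mix.astype(np.float64)
if mix.ndim > 1:
    mix = mix.mean(axis=1)             # fold to mono for a quick reading

edges = [31, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]
for lo, hi in zip(edges[:-1], edges[1:]):
    sos = butter(4, [lo, hi], btype="bandpass", fs=rate, output="sos")
    band = sosfilt(sos, mix)
    rms_db = 20 * np.log10(np.sqrt(np.mean(band ** 2)) + 1e-12)
    print(f"{lo:>5}-{hi:<5} Hz: {rms_db:6.1f} dB")
```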

 

Sculpting the Pyramid:

I hear lots of poorly mixed music from the internet where, frankly, the sonic pyramid is nowhere to be found: beats have piercing high hats as loud as the bass drums, or the vocal is extremely loud and stepping all over the mix. In reality, once the pyramid is visualized, it becomes an easy mental strategy to use with tools such as EQ. The great thing about constructing the mix with the pyramid is the way relationships between instruments become conceptualized, since each frequency range occupies an exact position within the pyramid. With this in mind, you begin to EQ and compress soloed instruments, but still make decisions with the mix as a whole in mind. See the sound – and the precise geometry of each frequency’s pocket in the mix: the kick is louder and near the bottom of the sonic pyramid you see; the snare is less intense, nearer the middle, going up the pyramid. Moving further up the frequency spectrum, the same goes for snares and high hats: snares should be louder than high hats, which contain higher frequencies and therefore sit above them in the pyramid. If two sounds share a similar frequency range, or pocket in the pyramid, as snares and vocals sometimes do, adjust your faders so they are equally intense but never fighting for frequency content. Overall, for each instrument, consider its most musical frequency and pocket it into your pyramid. Adjust the instruments in the pyramid pockets with an equalizer, and compress instruments interfering with adjacent pockets higher up in the pyramid. For example, if your mix contains a bass guitar and piano, your piano should not contain lower frequencies interfering with the bass’ space in the pyramid. The piano belongs in the mids, and its low-end content may need to be removed with EQ, or controlled with compression.

All in all, next time you hear a mix from a great engineer, where all instruments are present, rich, and not fighting for space, observe the pyramid scheme at work. Once you understand the pyramid scheme, it should be impossible to see the sound of a mix any other way in front of studio monitors, or any speaker for that matter. As abstract as your sonic vision may be, never will you ever “see” a kick drum on top of a high hat.

Chris Baylaender

Studio 11

Philosophy of the Recording Engineer

After years as a performing musician, producer, studio client, intern, assistant engineer, and engineer, I’d like to share my two cents on the purpose of the engineer, and what constitutes a great one.

I can distinctly recall a shocking moment during my very first internship in a recording studio. I was cleaning up a session from the previous night where a client had spilled all sorts of drinks on a hardwood floor. The surface was so sticky that my shoes were difficult to lift. I began to mop up the mess, and before I knew it, the senior engineer was yelling at me in an uproar: “You’re doing it wrong, mop with the grain of the wood!” At the time, I legitimately thought the guy was psycho for becoming so angry over how I was mopping. I couldn’t have been more wrong. I was mopping against the grain of the wood, and therefore inefficiently. The lesson I learned is that if something can be done better, do it. As an engineer, the right way is always the most efficient way, and that is why I deserved a tongue-lashing, even for what seemed to be a minor error. Even the most experienced engineers strive to improve their methods, and this goes beyond the technicalities of recording, mixing, and mastering – or mopping a studio floor. Maintaining a mindset of improvement is a way of life, extending to any circumstances an engineer may experience on a daily basis.

At face value, the studio’s engineer operates all the machines of the production process. The engineer is an expert on how they work and on putting them to work, individually and collectively. The engineer ensures the final product, the record, possesses optimal sound quality for its applications in the consumer industry. While all of this is true, I find this formal description of the studio engineer incomplete. It is missing the philosophical ramifications of an engineer’s purpose: in particular, the role of the studio aside from completing a project, and the timeless quality a great engineer can instill in a record.

One truth to working in a recording studio is you never know who or what is going to attend a session. Clients come from all walks of life, and consequently, with all varieties of music. Working at Studio 11, I have serviced clients speaking foreign languages, clients dressed in sparkling costumes, celebrities, gangbangers, clients who are engineers like me, clients who have made me pray with them before the session starts, professional musicians, the musically inexperienced, and even a blind client – just to name a few examples. Being a recording engineer is a task demanding social skills, plain and simple. Any given person must find comfort in the studio – with the engineer as the host.

At the end of the day, the engineer must host a meaningful, musical experience for all clients on the schedule. I believe an ideal studio session entails the client leaving with a quality record, and also as a better creator than when they first walked into the studio. The recording studio is a musical instrument. While the engineer is the expert at playing this instrument, involving the client must be encouraged. The mood, vibe, and communication of a studio session leave an imprint on any client’s satisfaction with a record. Ultimately, I believe a comfortable, creative atmosphere is necessary for studio sessions, and a good engineer will respect the creative styles of all clients. Input and opinion matter, and it matters how these viewpoints are communicated. In a nutshell, the creative direction of a studio session is decided by the client, not the engineer.

Nevertheless, the engineer directly manipulates the studio, operating its machines and making it all happen. Surely, there are creative elements of a record where influence from the client is usually absent, or entrusted to the engineer’s decisions. Overall, the engineer is the person most trusted with creating proper sound quality and using the studio efficiently. Moreover, I believe another duty is equally expected: engineers must also do all they can to bless a record with timelessness.

Timelessness in music is a quality possessed by everyone from Beethoven to Pink Floyd to Snoop Dogg. The timeless characteristic may be a combination of originality, authenticity, and uniqueness. What I can confidently state is that the studio session plays a major role in whether timelessness happens or not. My most effective approach toward giving clients a timeless record is treating the recording as an artifact. It’s not just an MP3 or WAV file upon leaving the studio doors, but also the most accurate document of a performance that will ever exist.

Identifying the grassroots of musical ideas is also something to make a mental note of. Such knowledge translates into decisions made by the engineer, consciously or subconsciously. I am hesitant to claim any piece of music is completely original, which is why acknowledging music from the past is crucial to realizing the artistic medium of recording. Knowing what came before is instrumental in building the foundation of a record; from then on, the sky’s the limit regarding creativity. Surely, though, the sonic foundation for the creative music must be there.

Lastly, I must touch back on reality: at times, the engineering process steers away from my utopian description of the effective studio session. Speed bumps and challenges inevitably surface. The job may be hard, but it’s a job we are lucky to have. When things do steer away, the right decision is finding a solution, never resorting to an excuse. It’s what great engineers do.

 

Chris Baylaender
Studio 11