This article is a follow-up to the previous article “Rules of Composition for Cinema”. You don’t have to have read it before this article, but it might help further explain some of the concepts if you’re unfamiliar with them. Check it out now or hang onto it for later reading. Your choice!
How can you tell when a person is really angry with you? When they're mourning the recent loss of someone important? When they're facing a morally difficult dilemma? Often they don't even need to say a word; you can just see it on their face.
Quick…with just a glance here, could you tell what the relationship between these three characters is? One of these characters is trying to kill another…gee, I wonder which one's which?
In the previous article (see Rules of Composition for Cinema) we went over a number of effective uses of cinematography to help tell your story without deferring to clunky exposition. And now we’ll take a hard look at one film, in particular, to see how it excelled in using cinematography to tell you exactly what’s going on with our characters beyond just what is being said.
Some of the concepts we previously went over included how to frame your subject, using depth in a shot, and the all-important Rule of Thirds (which you absolutely should learn to follow…and then break when needed).
When last we spoke on the subject, we looked at the (mis)use of the rule of thirds in Nicolas Winding Refn’s masterpiece Drive (2011). He and cinematographer Newton Thomas Sigel artfully composed each shot to add many delicious layers to the story for you to take in. It’s one of those movies you could probably mute and pretty much still follow along. Let’s pull a few examples of how they did this.
(And since art is subjective, let's just say that this is all one person's interpretation of each shot. You might read something very different into it, which is why, as a filmmaker, you should remember that not all audiences are going to get the same message from your movie.)
When you compare a film's opening and closing shot side-by-side, you can tell a lot about the journey that happened in between them. In this case, both the opening and closing shots of Drive focus on our protagonist, the Driver (Ryan Gosling), and tell us a lot about his world.
The opening shot is a slow, methodical tracking shot. Whenever you see a tracking shot, usually it’s because the director is saying “Hey! Look at this! This is important stuff!”. Notice how these key objects all end up centre-framed as the camera moves? They want you to focus on the stuff right in the middle of frame.
The camera starts on the marked-up road maps of LA (the Driver is planning a route…for what?). It moves to introduce our character, signature leather jacket highlighted, as he stares out the window while talking to someone on the phone (he remains a shadowy figure for now, but we know he's a badass with the jacket and there's a verbal exchange going on). It centres on the basketball game on the TV (planting the seed for a real slick moment in the next scene), then stops on his duffel bag (he's packed and prepped for something going down), which he grabs and exits with, leaving us with a window-framed shot of the sprawling downtown of LA (where he's about to lead us for the next scene).
This opening shot sets the stage for the movie’s opening chase scene, but also tells us that the Driver is methodical, brooding, a deep-thinker, subtle, cold and alludes to his profession as a getaway driver.
On the flipside, the ending contains just two simple shots of our hero, now having gone through the journey of rescuing his neighbour and love interest, Irene and her son.
He’s off-centre here, offset from the opening shot. Thrown out of balance. Framed left in cold light, he’s hurt (both emotionally and physically) with nothing but darkness behind and in front of him. In this image he’s isolated, alone, mournful and headed into uncertainty.
He's lost his only friend, killed off lucrative chances of a future, and is bleeding out…but he's done the right thing and helped someone who needed saving. He started out as a faceless figure in the shadows and ends revealed as a pained human being.
And in the final image, we see the endless, dark road ahead of him (depth!) with his eyes sneakily popping into the rearview mirror, still showing us more humanity than he did in the opening shot (the eyes being the window to the soul and all that).
Just examining these before-and-after shots that bookend the film tells you tons about where this character has been and the transformation that he has undergone.
And by the way, very little is heard in either of these scenes. In fact, much of the movie uses thick stretches of silence to create tension between characters. At these points, don't forget that something still needs to fill that silence. So if you tackle this dialogue-less method, I suggest looking into how to build a natural soundscape for your scene.
USE OF QUADRANTS
If you were to take two long strips of black tape and put them in a cross over your screen (don't actually do that) while watching Drive, you would notice how effectively the subjects of each scene are framed within these four sections.
When we see our first genuine interaction between the Driver and his neighbour, Irene, they both occupy the same space in each frame. There's a closeness, an attraction, between the two. In the second shot, however, the Driver appears framed in the mirror. He's in shadow, looming over her child and husband. His presence is a threat to her family.
When the Driver meets our antagonist, Bernie Rose (Albert Brooks), he towers over him menacingly. The Driver appears as a child, staring up to the seemingly giant figure before him. Mind you Bernie has the benefit of standing on the bleacher steps, but it’s the framing here that gives away his power over the Driver.
Once the Driver and Irene get a chance to bond over a day together, they have this moment of prolonged silence as they gaze at each other. Notice the spacing between them? This shot goes on for over a minute and most of it is silence. They have this gap separating them and they seem unable to close it. Something is keeping them from getting too close to one another.
The next time we see the Driver with Bernie, they are in business with one another. Bernie has funded a new race car for the Driver to race and make Bernie some money. In Bernie's frame, all lines lead to him: the girders in the ceiling and the hanging lights all draw into him. He's the powerful focus again. Nothing intrudes on his territory. In the Driver's frame, he is trapped on the side, his body facing away from Bernie (he doesn't welcome Bernie's presence), the car literally closing around him, and Bernie even intrudes into frame right a bit.
Midway through the film we see a brief exchange between the Driver and Irene again. They start off sharing the frame, but as they talk the camera pushes in on each of them, separating them in depth until they are cut off from each other completely. He has a secret that he can't share with her, and it's driving a wedge between the two of them.
YOU GET THE PICTURE
Without going into too much spoiler territory (although I'm sure you can fill in the blanks a bit), the story progresses from here to its ultimately bittersweet ending. Looking back at the images you've seen so far, a lot of the story is told in the imagery itself. On paper, the dialogue between the main characters at times appears very minimal, not nearly as expositional as other scripts tend to be. This is because the filmmakers took great care in crafting these frames to show the evolution of each of these characters' relationships. And you can, too.
When you find yourself looking over your next screenplay, take that red marker out and try to highlight lines that could be told with just a frame. If your viewer muted your movie, would they be able to follow the story still?
Not all video editors are musicians. Most musicians aren’t editors. And that’s okay.
But it doesn’t mean we can’t learn from each other. Music is, and always will be, an extremely important component of any type of video. Whether creating infectious commercials and catchy jingles, tense drama, or emotional underscores, you need to be comfortable editing music for your videos.
Coming from a musical background, I have found that being familiar with music production consistently boosts my creativity and efficiency while editing video. I truly believe that understanding the basics around music will help you not only in your specific craft, but also in your ability to communicate with other collaborators such as composers, directors, or even clients.
Must-know Music Terminology
Here is a quick jumpstart guide to musical terms and ideas that can help you edit.
Equalization is one of the most popular tools for making, mixing, and editing music. EQ tools are designed to manipulate the frequencies of a certain sound. For video editors, EQ is useful to change how certain elements are perceived, such as dialogue, music, and sound effects.
For example, to make a voice sound like it was recorded through a telephone, an EQ tool is used to reduce the high and low frequencies, while boosting the mids. This adjustment is convincing because small phone speakers aren’t capable of producing big bass sounds or super high pitched frequencies.
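As a rough illustration of that adjustment, here's a numpy sketch that brick-wall filters a signal to the classic 300–3400 Hz telephone band. A real EQ plugin uses gentler slopes and resonance controls; this just shows the idea of cutting lows and highs while keeping the mids.

```python
import numpy as np

def telephone_eq(signal, sample_rate, low_hz=300.0, high_hz=3400.0):
    """Crudely simulate a phone speaker by zeroing all frequencies
    outside the classic 300-3400 Hz telephone band."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# A 100 Hz hum (below the band) is removed; 1 kHz "voice" energy survives.
sr = 8000
t = np.arange(sr) / sr
mixed = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)
filtered = telephone_eq(mixed, sr)
```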
A large part of our jobs as video editors and filmmakers is to recreate realism in a scene, and EQ is usually a good place to start with audio.
A bar in music is a little less exciting than the one you’re headed to this weekend. But it’s just as dependable.
A musical bar refers to a group of beats. Popular music usually has four beats in a bar. That means if you count along with each beat, you can count: 1, 2, 3, 4, then repeat: 1, 2, 3, 4. Musically, bars denote segments of time that can be counted. As a video editor, recognizing these segments can help you identify where melodies and rhythms will start to repeat. This offers a great blueprint for figuring out where to cut music tracks.
A downbeat in music is usually the first beat of a bar. It is often accented in some way. Sometimes the drummer will add an extra cymbal.
Downbeats are the easiest place to cut music tracks for your edits. Using your knowledge of the audio waveform, look for kick drums at the start of a bar. Cut right on the downbeat, and then splice in a different section of the song that also starts on a downbeat.
Add a small crossfade and your audience is none the wiser.
Tempo is something everyone can feel when listening to music. The rate of the beats sets the tempo, or pace, of a song. Tempo is usually measured in BPM, which stands for beats per minute.
When exploring music to use for your edits, it can be helpful to have a rough BPM in mind. Is your edit a hyped up sizzle reel with electronic music? You can bet that you’ll be using a song with around 128 BPM. Maybe you want a slower piano instrumental, which would most likely be under 90 BPM.
Many music libraries have a ‘sort by BPM’ feature. Use this to help find appropriate music much more efficiently.
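The arithmetic behind BPM, bars, and frames is simple enough to sketch. Assuming four beats per bar (as in most popular music), a few purely illustrative helpers:

```python
def beat_seconds(bpm):
    """Duration of one beat in seconds."""
    return 60.0 / bpm

def bar_seconds(bpm, beats_per_bar=4):
    """Duration of one bar (four beats in most popular music)."""
    return beats_per_bar * beat_seconds(bpm)

def beat_in_frames(bpm, fps=24):
    """How many video frames one beat spans, for cutting on the beat."""
    return fps * beat_seconds(bpm)

# At 128 BPM a beat lasts 0.46875 s, a bar 1.875 s,
# and at 24 fps each beat covers 11.25 frames.
print(beat_seconds(128))    # 0.46875
print(bar_seconds(128))     # 1.875
print(beat_in_frames(128))  # 11.25
```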
Songs are built out of different sections, like the intro, verse, chorus, bridge, and outro. It is a good idea to become familiar with the common characteristics of each section in order to know which parts to use for your edit.
Intro: the beginning of the song. Often softer and slower than the rest.
Verse: the part where the vocalist starts singing lyrics. Verses usually are steady and slowly build in instrumentation and intensity.
Chorus: the refrain of the song, or the part that gets repeated multiple times at different points. This is often the loudest, catchiest, and most impactful part of the song.
Bridge: a section between two other parts of the song. Bridges are often used between a verse and a chorus to get the listener ready for the impact of the chorus.
Outro: after the last chorus, bridge, or verse, the outro is where the song ends. Many bands end on one last note or chord. Placing the outro of a song at the end of your video can add a satisfying closure to your edit.
Audio stems refer to isolated elements of a song or mix. For example, the “guitar stem” would be just the guitar parts of a song, isolated on their own track.
Some music libraries offer stems with their downloads, allowing you much more control over how you arrange music in your edit. You can now choose when to bring in each element of a song, like the beat, melody, and vocals.
Reverb is all around us, at all times. Reverb is essentially the sound of a space. Have you noticed how your voice sounds different in a large empty building than it does in a small space with a bunch of furniture or objects?
This is because the sound waves produced by your vocal cords are reverberating around your environment, changing how you perceive the sound.
Using reverb effects on your audio clips can change how they sound to the audience. For example, if you wanted to take a regular voice and make it sound like it was on stage in a big amphitheater, you’d need to add a lot of reverb!
Delay is the effect of repeating certain regions of audio at a determined rate or pattern. This can be applied creatively for a variety of effects, like dream states or paranormal happenings.
Delay is also a useful tool to simulate PA speakers, megaphones, or to create echoes.
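Under the hood, a delay is just the signal summed with progressively quieter copies of itself. A minimal numpy sketch (the 250 ms timing and 0.5 feedback are arbitrary example values, not a standard):

```python
import numpy as np

def delay_effect(signal, sample_rate, delay_ms=250.0, feedback=0.5, repeats=4):
    """Mix delayed, progressively quieter copies of the signal back
    onto itself -- a basic echo/delay effect."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    out = np.zeros(len(signal) + delay_samples * repeats)
    out[:len(signal)] += signal
    for i in range(1, repeats + 1):
        start = delay_samples * i
        out[start:start + len(signal)] += signal * (feedback ** i)
    return out

# An impulse produces echoes at 250 ms intervals, each half as loud.
sr = 1000
impulse = np.zeros(10)
impulse[0] = 1.0
echoed = delay_effect(impulse, sr)
```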
Understanding these concepts and tools is extremely valuable for video editors and filmmakers. Use these ideas and terminology to make creative choices on your edits and communicate clearly with your team.
Jason is currently offering the Soundsnap community a 95% discount on his top-rated online course, The Complete Audio Guide for Video Editors, which includes 4.5 hours of in-depth video tutorials. Clicking the link automatically applies your discount.
Premiere Pro has plenty of features right out of the box that can get most jobs done for a lot of video editors. However, there are countless software plugins that make Premiere Pro even more powerful. This article will focus on six of them, covering just about every part of post-production. There's a plugin for top-notch slow-mos, smooth motion tracking, painless color correction, and more. For each plugin, we'll give you the quick rundown, show you where you can find it, and tell you how much it costs. Let's get started!
Twixtor is a time remapping plugin. It can create visually-stunning slow motion and fast motion video effects. While Premiere Pro’s internal time remapping tool works great, Twixtor takes this to the next level and then some. It even works on 360 footage.
Twixtor comes in two flavors: Twixtor V7 and Twixtor V7 Pro. Twixtor V7 costs $329.95. Twixtor V7 Pro comes with all of Twixtor V7’s features plus track point guidance, RGB+A tracking, motion vector exporting, and more. Twixtor V7 Pro is $595.
Magic Bullet Looks is one of the top color correction tools for Premiere Pro. Its 42 tools and 200 presets result in one of the most user-friendly experiences an editor can have while color correcting. Magic Bullet Looks uses real-time color grading with OpenGL and CUDA support, so renders aren't required to view color corrections.
Magic Bullet Looks' presets are designed to match popular movies and TV shows, and you can alter them as much as you need to create your own looks. Add vignettes and grain to your cut to build a style of your own.
You can get a free trial of Magic Bullet Looks. If or when you're ready to purchase, it's $399.
Mocha Pro 2019 is the one motion tracking plugin to rule them all. It is essentially an app inside an app. Mocha Pro 2019 will open up inside of Premiere Pro for you to do your motion tracking.
Need to blur a logo on a person that's moving? Want to replace what's shown on a cell phone in a shot? Mocha Pro 2019 has you covered. Use Mocha Pro 2019 for planar tracking, roto and masking, object removal, object insertion, stabilization, and more. There's a bit of a learning curve when you're getting started, but it can give you some seriously amazing results.
A new license for Mocha Pro 2019 costs $695 or you can get an annual subscription for $295.
Film Impact’s Transition Packs are easy-to-use essential transitions for every Premiere Pro user. Let’s face it — Premiere Pro doesn’t have the best out-of-the-box transitions. Film Impact’s transitions are the perfect plugin to give you that professional look for any transition point.
These transitions include Impact Push, Impact Blur Dissolve, Impact Stretch, Impact Rays, Impact Chroma Leaks, Impact Wipe, Impact Light Leaks, Impact Solarize, Impact 3D Flip, Impact Pop, and Impact Pull. Film Impact Transition Packs are ridiculously simple to install and to get started with.
There are four Transition Packs that range from $59-$99 for individual packs. You can get all four for $287. The Bounce Pack which contains Impact 3D Flip, Impact Pop, and Impact Pull is $85.
RX 7 Standard by iZotope is an audio repair tool plugin for Premiere Pro. You can do things like change the inflection of dialogue with their Dialogue Inflection tool. How many times have you needed something like that, huh? Other features include removing room noise, voice reverb, and breaths in a snap.
RX 7 Standard by iZotope costs $399. iZotope has more expensive products with even more tools if you’re looking for that ultimate audio plugin too.
Another highly-used plugin by Red Giant is their Shooter PluralEyes 4. This plugin syncs audio and video clips in seconds. If you have any type of workflow that requires you to sync audio with video, Shooter PluralEyes 4 is a must. It opens up as another panel inside of Premiere Pro and will shave hours off of importing and organizing new footage and audio.
To use the plugin, import your video and audio files and hit Synchronize. Shooter PluralEyes 4 takes care of the rest and populates your timeline full of synced clips. Shooter PluralEyes 4 costs $299 or you can get the full Shooter Suite that includes Shooter PluralEyes 4, Shooter Offload, Shooter Instant 4K, and Shooter Frames for $399.
No matter what type of video you’re creating, there’s probably a plugin that’ll make it easier to create and give you better-quality effects. These six are just a handful of wonderful Premiere Pro plugins.
Yes, it’s true—video editors can have friends too.
Behold the audio waveform, that squiggly visual representation of what you’re hearing. Learn it, understand it, and most of all, use it!
You’re probably already familiar with waveforms. They can come attached with video clips from built-in camera microphones, separately from dual-system audio, or in the form of music or sound effects.
Once a waveform is imported into your editor of choice, a visual preview of the waveform is usually generated automatically. Audio waveforms can provide a blueprint for editors, both technically and creatively.
But, what is a waveform?
Waveforms are actually graphs that show the amplitude of a recorded signal over time. In other words, waveforms show us when things get really loud, really soft, and everywhere in between for the duration of the clip.
Recognizing common patterns in waveforms can open up new ways to edit. Here are a few of my favorites:
The Kick Drum
The all-important kick drum (or bass drum) marks the tempo of the music, which, in turn, often influences the pace of an edit. Because kick drums are composed of low frequencies, which are visually bigger in a waveform, they are easy to identify without even listening to the music.
Follow along with this song until you hear the kick drum come in. Pay attention to each time there's a kick drum, and notice the similar shape it forms in the waveform. Now, pause the music and see if you're still able to identify where the kick drums are.
Once you can recognize kick drums and other music patterns, you'll be able to edit much faster and more purposefully. For example, if you know you want to use a section of music with drums, you can skip to the more upbeat part of a track without having to listen through everything else. Conversely, if you want a more subdued section, you can jump to a segment of the song without drums.
Additionally, lining up visual cut points with these audio markers, like kick drums, is an easy way to add rhythm to your edit.
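If you want to go further, the "kick drums are big low-frequency bumps" observation can even be automated. A rough numpy sketch (the 150 Hz cutoff and 50 ms window are arbitrary assumptions you'd tune, not a standard):

```python
import numpy as np

def low_band_energy(signal, sample_rate, window_s=0.05, cutoff_hz=150.0):
    """RMS energy of the sub-150 Hz band per short window -- kick drums
    show up as spikes here even when the full mix is busy."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sample_rate)
    spectrum[freqs > cutoff_hz] = 0.0          # keep only the low band
    low = np.fft.irfft(spectrum, n=len(signal))
    win = int(sample_rate * window_s)
    n_windows = len(low) // win
    chunks = low[:n_windows * win].reshape(n_windows, win)
    return np.sqrt((chunks ** 2).mean(axis=1))  # RMS per window

# Synthetic mix: a steady 1 kHz tone plus 60 Hz "kicks" at t=0 s and t=1 s.
sr = 4000
t = np.arange(2 * sr) / sr
mix = 0.3 * np.sin(2 * np.pi * 1000 * t)
for kick_t in (0.0, 1.0):
    s = int(kick_t * sr)
    mix[s:s + 400] += np.sin(2 * np.pi * 60 * t[:400])
energy = low_band_energy(mix, sr)  # spikes in the windows holding kicks
```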
Many types of videos, be it commercial, documentary, or even podcasts and YouTube videos, start with an interview or narration. As an editor, this often means getting handed lengthy clips that take a long time to listen through and select the best parts.
Like with music, the audio waveform of a voice track provides some clues about which parts of the clip might be useful to you.
There will be a big visual difference between the speaking voice and the background noise. Skipping forward in your timeline between the gaps of empty space is an easy time-saver.
If you are working with interview footage, often the interviewee (the subject) has a dedicated microphone while the interviewer isn’t mic’d up. This creates a difference in their audio waveforms. The subject will have a bigger waveform, letting you know visually where the questions are being asked and answered.
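That visual difference between speech and background can also be found programmatically. A minimal numpy sketch (the threshold and window length are arbitrary choices you'd tune per recording):

```python
import numpy as np

def speech_regions(signal, sample_rate, window_s=0.1, threshold=0.1):
    """Return (start_s, end_s) spans where windowed RMS exceeds the
    threshold -- i.e. where someone is probably talking."""
    win = int(sample_rate * window_s)
    n = len(signal) // win
    rms = np.sqrt((signal[:n * win].reshape(n, win) ** 2).mean(axis=1))
    loud = rms > threshold
    regions, start = [], None
    for i, is_loud in enumerate(loud):
        if is_loud and start is None:
            start = i
        elif not is_loud and start is not None:
            regions.append((start * window_s, i * window_s))
            start = None
    if start is not None:
        regions.append((start * window_s, n * window_s))
    return regions

# A 3-second clip with "speech" (a tone) only between 1 s and 2 s.
sr = 1000
audio = np.zeros(3 * sr)
t = np.arange(sr) / sr
audio[sr:2 * sr] = 0.5 * np.sin(2 * np.pi * 200 * t)
print(speech_regions(audio, sr))  # [(1.0, 2.0)]
```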
Jumping into the world of sound design can be daunting, but it doesn’t have to be. Just like every other type of audio file, sound effects each have their own waveforms. Since sound effects tend to be short in length, it is easier to decipher visually where the important moments are.
For example, a “hit” has a big waveform at the very beginning and then gets smaller over time. A “whoosh” will start small, get bigger until it peaks, and then get smaller again.
Using your knowledge of the waveform, you can line up sound effects to work in concert with music, dialogue, and visuals. I will often line up the start of a hit, or the peak of a whoosh, with a kick drum of a music track. This way, everything sounds more cohesive and impactful.
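To make those shapes concrete, here's a small numpy sketch of the two envelopes described above. The decay rate and peak position are just illustrative choices:

```python
import numpy as np

def hit_envelope(n):
    """Loudest at the very start, decaying away -- like an impact."""
    return np.exp(-5.0 * np.linspace(0, 1, n))

def whoosh_envelope(n, peak=0.7):
    """Rises to a peak about 70% of the way through, then falls off."""
    t = np.linspace(0, 1, n)
    return np.where(t < peak, t / peak, (1 - t) / (1 - peak))

# Lining the whoosh's peak (not its start) up with a kick drum or a
# visual cut is the trick described above.
```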
On your next few projects, start to take note of the waveform. Notice how certain sounds look visually. Appreciate the patterns that are laid out in front of you. And use them to your advantage.
Want to add an extra level of polish and professionalism to your edits? How about increasing your efficiency, leaving you more time to craft the story rather than smoothing out audio?
Luckily for us, Premiere (and other popular programs) comes with many basic audio tools and effects. Learning the nuances of these tools helps avoid a lot of guesswork and trial and error. Let’s get started.
Know Your Crossfades
The crossfade is the most widely used audio transition of all time—not only for filmmakers, but for music producers, DJ’s, and more. A crossfade works by smoothly raising or lowering the volume of audio clips to which it is applied. While this concept is simple, knowing the different types of crossfades can add a level of polish to your edits. In Premiere, there are three types of fades to choose from.
The Constant Power is the default crossfade in Premiere. This means that when you press the keyboard shortcut for an audio crossfade (CMD+SHIFT+D), a Constant Power fade is applied to your selected cut point. (Replace CMD with CTRL on Windows) The Constant Power transition applies a smooth, gradual fade between clips. This effect is the most similar to a cross dissolve in the video.
Constant Gain is more seldom used because it can sometimes sound abrupt. This fade works by decreasing (or increasing) audio at a constant rate. This is a subtle difference from Constant Power, which smooths the rate at which the volume is automated.
I tend to only use the Constant Gain effect when the other fades aren’t working for some reason. There have been several occasions where a crossfade transition sounded a little off, and changing it to a Constant Gain resolved the issue.
The exponential fade is an extremely useful and often overlooked transition. This fade works by starting the volume adjustment slowly and then increasing it faster and faster (exponentially) until it is finished.
Because of its exponential curve, this transition can be used for specific purposes. I will often use an Exponential Fade at the end of my edits because it allows for a shorter fade out without sounding too abrupt. This is also useful when trying to fade out quickly after a music hit, lyric, or measure.
Exponential fades are also great at smoothing out cut-up dialogue. Because of its curve, it allows for a smooth transition before, between, and after individual words and syllables.
BONUS TIP: Hold SHIFT while adjusting the length of crossfade to only alter one side at a time.
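Premiere doesn't publish its exact fade formulas, but the three names map onto standard gain curves. A numpy sketch of the textbook shapes each name refers to:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)  # normalized position within the fade

constant_gain = t                       # linear ramp: level changes at a fixed rate
constant_power = np.sin(t * np.pi / 2)  # equal-power curve: smoother perceived loudness
exponential = t ** 3                    # stays quiet, then rises quickly at the end

# In a crossfade, the outgoing clip gets the mirrored curve. With the
# equal-power pair, the summed energy stays constant across the fade:
out_gain = np.cos(t * np.pi / 2)
energy = constant_power ** 2 + out_gain ** 2  # == 1 everywhere
```

This is why Constant Power sounds smooth while Constant Gain can feel abrupt: our ears respond to energy, and the linear ramp dips in perceived loudness at the midpoint of the crossfade.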
Master the Pen Tool
The pen tool (‘P’ on the keyboard) is a versatile tool within Premiere. In the audio department, the pen tool is used to draw automation by hand. The pen tool, while slower to use than preset effects like crossfades, offers the most precision when manipulating audio levels.
The pen tool seems pretty basic but actually has some hidden features. After creating a keyframe (the little dots along the volume line that can be created with the pen tool), holding COMMAND on Mac or CONTROL on Windows and clicking the keyframe again changes it from a linear curve to a bezier curve.
Bezier curves allow you to manually adjust the rate at which volume is automated. This is achieved by clicking and dragging on either of the blue dotted handles connected to the keyframe.
Using the pen tool for manual adjustments allows for the highest precision audio work, especially useful for smoothing out dialogue levels and ducking music.
Use Bracket Keys to Adjust Volume
The bracket keys [ ] are located above the ENTER key on the right side of your keyboard. When an audio keyframe is selected, tapping the left bracket lowers that keyframe, while the right bracket increases the level. Audio keyframes are those points in the image above that control the volume.
Using the brackets while multiple keyframes are selected is especially useful: it raises or lowers the whole section you choose at once. You could, for example, select the clips of an interviewee who spoke more softly than their counterpart and bring the volume up just by tapping a key.
Nudge Frame Shortcut
The Nudge Frame shortcut is CMD+Left/Right on Mac (CTRL+Left/Right on Windows). Using this shortcut will move all selected clips by exactly one frame in your chosen direction. For video, this is a good way to make tiny adjustments while searching for the perfect cut point.
Similarly, for audio, nudging clips by a frame can help to find the perfect audio cut point. I often nudge audio clips when making music cuts and trying to line up the cut seamlessly on a beat.
This method of micro adjustments is great for finding the best sounding transition spot. Even without understanding the techniques behind cutting music, nudging a cut point until it “sounds right” is much easier with this shortcut.
Best of all, the nudge frame tool preserves all video and audio transitions between clips.
BONUS TIP: hold SHIFT while nudging clips to move them by five frames at once instead of one.
Audio Time Units
Sometimes when you are nudging clips left or right, searching for that perfect cut point or synchronization, things sound a tiny bit off. You try going one frame to the left, then to the right, but the audio still isn’t perfectly lined up how it should be.
By default, our sequence in Premiere shows us a frame-based timeline. This means that one frame is the smallest unit of time that we can travel. That’s as precise as we can get, meaning sometimes the audio doesn’t get lined up just right.
However, by switching your timeline to “Show Audio Time Units” (see picture below), you can now zoom much further into the waveform. Now you can make more exact adjustments without the previous limitations.
This is made possible because Premiere will now ignore the timing of the video frames and show you a sample-based timeline. After you make your adjustments here, remember to uncheck "Show Audio Time Units" to return to normal frame-based editing.
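The precision difference is easy to quantify. At the common rates of 24 fps and 48 kHz, one audio sample is two thousand times finer than one video frame:

```python
# One video frame vs one audio sample as the smallest possible nudge.
fps = 24
sample_rate = 48000

frame_ms = 1000.0 / fps            # duration of one video frame in ms
sample_ms = 1000.0 / sample_rate   # duration of one audio sample in ms
samples_per_frame = sample_rate / fps

print(round(frame_ms, 2))    # 41.67
print(round(sample_ms, 4))   # 0.0208
print(samples_per_frame)     # 2000.0
```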
Perhaps you've rented a rehearsal space and want to record your band. Maybe you're heading into the wilderness to record booming thunder. You may wish to record a few bars in your home studio, or the roaring engine of a sports car.
What’s the best way to capture stereo sound?
It’s not easy to know. Every sound has its own nuance and character. Recording a classical ensemble is very different from capturing a sax solo. So, which do you choose?
Today's post is designed to help. It introduces four popular stereo recording techniques. For each one, you'll learn which microphones it uses, how to set them up, and how they sound, as well as their benefits and drawbacks. Then we'll look at ways to decide which technique is best for you.
Let’s get started.
X/Y Pair Stereo Recording
The X/Y pair technique is perhaps the most common stereo recording method. Why? It’s simple to set up and get recording quickly. Let’s take a look.
Microphone Selection, Set Up, and Positioning
The X/Y technique aligns two "matched" cardioid microphones at 90 degrees with their capsules at the same point. Of course, it's impossible to have two capsules in precisely the same position, so this method typically places the capsules one over the other so they are near the same position without touching.
While a 90 degree angle is most common, it’s possible to change the angle to 120, 130, or 180 degrees to have variations of the same effect. Some also use bi-directional or super-cardioid microphones, although this changes the nature of the sound.
How the X/Y Technique Sounds
The result? A clear and present recording that feels more narrow and focused than real life. Because of this, the X/Y technique doesn’t create spacious recordings with depth or “soundstage”.
The X/Y method is simple. It’s easy to align the microphones using an inexpensive stereo bar or inside a windshield when field recording. Due to the microphone alignment, the X/Y technique is a good choice when mono compatibility is needed, or to avoid phase issues. Because of this, it’s a popular method for recording closely, such as single instruments in a home studio, or for specific sound effects where a clear and stable stereo image is preferred.
Many feel the X/Y method captures recordings that are “too narrow” and lack spaciousness. Also, the sound may lack bass if cardioids are used, and may lose other frequencies when played back in mono. Because of these reasons, the X/Y technique isn’t the best choice for recording subjects at larger distances.
A/B Spaced Pair Stereo Technique
Need spacious recordings? The A/B technique will deliver. As the name suggests, sound arrives at a widely "spaced pair" of microphones at slightly different times, creating a stereo effect.
Microphone Selection, Set Up, and Positioning
Two omnidirectional microphones are the most popular choice for the A/B technique. The microphones are arranged parallel to each other, 40-60 cm apart. Wider distances are possible; however, the trade-off is that sound nearer to the microphones won't be captured as well.
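The stereo image from a spaced pair comes from those arrival-time differences, and the arithmetic is quick to sketch. This assumes the usual room-temperature speed of sound of 343 m/s and treats the source as far away (a simplification):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def interchannel_delay_ms(spacing_m, source_angle_deg):
    """Time-of-arrival difference between two spaced omni mics for a
    distant source at the given angle off-center (0 = straight ahead)."""
    path_diff = spacing_m * math.sin(math.radians(source_angle_deg))
    return 1000.0 * path_diff / SPEED_OF_SOUND

# A source 45 degrees off-center, with mics spaced 50 cm apart:
print(round(interchannel_delay_ms(0.5, 45), 2))  # 1.03 (ms)
```

Delays of around a millisecond are enough for our ears to localize the source, which is why even modest spacings produce a wide image.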
How the A/B Technique Sounds
A/B stereo recordings produce a wide, spacious image. They accurately represent large spaces, and create a realistic sensation of an environment as a whole.
Why use A/B?
It’s simple to set up and get running quickly. It provides very good frequency representation, even bass tones, which is something that other techniques lack. It’s best used to capture width, breadth, and depth. Recordings of nature ambiences and entire music ensembles shine with the “spaced pair” method.
There is one major problem with A/B recordings: they do not work well in mono. When mono playback is attempted, listeners may hear an unpleasant comb filtering effect. There is also less distinct stereo separation with this format. Finally, the setup is less mobile than other methods, so it can be a pain if a project requires changing recording locations often.
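That comb filtering happens because summing a signal with a delayed copy of itself cancels certain frequencies. A minimal illustration of where the cancellations fall, assuming an idealized pure-delay model (the function name is mine):

```python
def comb_null_frequencies(delay_s: float, count: int = 3) -> list:
    """First `count` cancellation frequencies (Hz) when a signal is
    summed with a copy of itself delayed by `delay_s` seconds.
    Nulls fall at odd multiples of 1 / (2 * delay)."""
    return [(2 * k + 1) / (2.0 * delay_s) for k in range(count)]


# A 1 ms inter-microphone delay, folded to mono:
# first three nulls at 500 Hz, 1500 Hz, 2500 Hz
print(comb_null_frequencies(0.001))
```

The shorter the delay, the higher the first null, which is one intuition for why near-coincident and coincident techniques survive a mono fold-down better than a widely spaced pair.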
ORTF Near-Coincident Stereo Technique
The ORTF method was designed to emulate natural human hearing. Pioneered by the Office de Radiodiffusion Télévision Française (ORTF), the French national broadcaster that preceded Radio France, this near-coincident method adopts the advantages of the X/Y and A/B methods without their drawbacks.
Microphone Selection, Set Up, and Positioning
This technique calls for two cardioid microphones positioned 17 cm apart, facing outwards at an angle of 110 degrees. The alignment was carefully considered: the angle was designed to mimic the acoustic “shadow” of the human head, and the spacing to match the distance between the ears.
How the ORTF Technique Sounds
The result of the ORTF method is a realistic stereo field across the horizontal plane, similar to natural hearing. The cardioid microphones reject off-axis sounds, so less of the spaciousness of the area is captured (although the image is wider than X/Y).
Another simple stereo technique, ORTF is a solid choice for recreating realistic human hearing. It also offers decent mono compatibility (although not perfect), and good mobility too.
Because the ORTF method uses cardioid microphones, it may not represent bass as well as techniques using other microphones. Cardioids also reject off-axis sounds, so less of the “room” is captured using this method. Some engineers also notice that a “hole” can be created in the centre of the stereo field.
Mid-Side (M/S) Stereo Technique
In 1954, Holger Lauridsen, head engineer of the R&D department at the Danish State Radio, invented a technique while researching spatial audio. It is called mid-side stereo recording, and is a flexible – although complex – method of recording stereo sound.
Microphone Selection, Set Up, and Positioning
The M/S method uses two different microphones: a cardioid microphone (the “mid” channel) and a figure-of-eight or bi-directional microphone (the “side” channel). The technique positions the cardioid facing forward while the bi-directional records sound to the sides. This recording must then be decoded. How? The method duplicates the bi-directional sound onto a third channel and inverts its phase. The result is three channels of audio: left (bi-directional), centre (cardioid), and right (phase-inverted bi-directional).
Mixing software or hardware can perform this decoding automatically, or it can be done manually in an audio editor.
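In sample terms, the decode is just sums and differences: left is mid plus side, right is mid minus side, with a gain on the side channel controlling width. A minimal sketch in pure Python (names and values are illustrative):

```python
def decode_mid_side(mid, side, width=1.0):
    """Decode M/S sample lists into a (left, right) pair.

    left  = mid + width * side
    right = mid - width * side

    A width of 0 collapses the result to mono (the mid channel alone);
    larger widths widen the stereo image.
    """
    left = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right


mid = [1.0, 0.5]
side = [0.5, -0.25]
left, right = decode_mid_side(mid, side)
print(left)   # [1.5, 0.25]
print(right)  # [0.5, 0.75]
```

This is also why M/S is so mono-safe: summing left and right cancels the side channel entirely and leaves only the mid signal.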
How the M/S Technique Sounds
The result of this complex engineering? A stereo recording with a variable width. Raising or lowering the gain of the side channels makes the stereo recording seem wider or narrower. Prefer a mono recording? Simply drop the side channels and use the cardioid channel by itself.
The M/S technique creates a solid middle image, and places elements well within the stereo soundscape.
The M/S technique is prized for its flexibility. Attenuating the side channels allows a single recording to be transformed into a stereo soundscape with multiple widths, both during and after recording. It is also known as a “safe” stereo recording method that avoids phase problems.
It’s a compact, mobile stereo arrangement that uses its strong middle image for capturing instruments in a home studio and specific sound effects, and ambiences with detailed soundstage.
What’s the downside to this method? Well, monitoring mid-side stereo while recording is tricky. It’s difficult to know what is being recorded unless the audio is being decoded at the same time. A similar concept applies when working with M/S files later, too: they must be decoded to be used, and not everyone has software – or the knowledge – to work with mid-side recordings properly.
The Best Stereo Recording Method
So, which stereo recording method is best?
To decide, consider the subject you’re recording. Is it a single guitar? Are you gathering field recordings of a barking dog? Perhaps you’d like to capture something more spacious, such as a symphonic performance or rainfall. It’s vital to consider the breadth of the recording needed as well as the distance to the subject. Close subjects may demand M/S or X/Y recordings. Ambient subjects are well suited to A/B recording.
It’s also important to consider mono compatibility. Will the recording be played back through a single source? Methods that reduce or eliminate phase problems are the best choice. ORTF works well here, and M/S is unmatched.
Do you need to be mobile? Some microphone placements are easy to pack into a windshield and carry from place to place. Other projects don’t need this and are better served by arranging the microphones once and leaving them where they will remain all day. An M/S, ORTF, or X/Y arrangement is favored for mobility.
Not everyone has the time to arrange complex microphone placements or work with unconventional channel layouts. So, ease of use bears consideration, too. If that is the case, M/S recording is best avoided.
Finally – and most importantly – consider the sound you need. Do you mind losing a bit of bass when using cardioid microphones? Prefer to mimic human hearing? Do you need to choose between a tight, focused recording or a spacious, atmospheric sound?
When you weigh each of these five considerations wisely, you’ll have the tools to find the stereo technique that’s best for you.
Timewarp, Fit To Fill, and Avid Media Composer’s other time manipulation tools pale in comparison to the simplicity of the Trim To Fill Effect. Trim To Fill is hands down the quickest and easiest way to speed up or slow down a clip inside of Avid Media Composer.
For this post we’ll first show you how to use Avid Media Composer’s Trim To Fill Effect. Then we’ll go into some finer details of when and why to use it.
How to use Avid Media Composer’s Trim To Fill Effect
Let’s look at the timeline in the picture below:
In this scenario, we want to take the middle clip on V1 and expand it all the way out to fill the gap between it and the next clip. This will slow the clip down.
To do this, first open up the Effect Palette by hitting Command+8 if you’re on a Mac or Control+8 if you’re on a PC. Navigate to the Timewarp folder on the left column and once selected find the Trim To Fill Effect in the right column.
Drag and drop the Trim To Fill Effect onto the clip you want to adjust. In our scenario, this is that middle clip on V1. Now the fun part!
Go into Trim Mode and roll the clip out or in, depending on whether you want to slow the clip down or speed it up. In our scenario, we’ll roll the clip out so it covers the gap in the track, which will slow it down from 100% to 62%.
Boom. That’s it. If you need to adjust the trim you made just jump back into Trim Mode and make whatever adjustments you need to make. The Trim To Fill Effect will change the speed of the clip as you trim.
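The speed percentage Trim To Fill reports falls straight out of the duration change: the same source material spread over more timeline plays back slower. A quick sketch of the arithmetic (frame counts here are hypothetical, chosen to match the 62% figure above):

```python
def playback_speed_percent(source_frames: int, timeline_frames: int) -> float:
    """Speed after trimming: `source_frames` of material spread over
    `timeline_frames` of timeline plays at source/timeline of its
    normal rate, expressed as a percentage."""
    return 100.0 * source_frames / timeline_frames


# Doubling a clip's timeline length halves its speed:
print(playback_speed_percent(100, 200))  # 50.0
# Stretching 62 frames of source to fill 100 frames of timeline:
print(playback_speed_percent(62, 100))   # 62.0
```

Trimming the clip shorter works the same way in reverse: squeezing the source into fewer frames pushes the speed above 100%.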
When and Why to Use Trim To Fill versus Other Effects
You’ll want to use Trim To Fill whenever you have an area that’s too large or too small for a given clip. Or a clip whose speed you want to adjust but don’t quite know for how long. You’ll probably find it more comfortable to trim a clip rather than jumping into Timewarp’s Motion Effect Editor.
However, you will find that you can still use the Motion Effect Editor on a clip with Trim To Fill applied. It’s kind of the best of both worlds. And if you want to ensure smoother playback, go in and change the Type: dropdown from “Duplicated Field” to “FluidMotion” or one of the other options.
To get to the Motion Effect Editor on a clip with Trim To Fill applied to it, put your Time Position Indicator over the clip, make sure the track is selected, and enter Effect Mode. Or you can enter Effect Mode then select the clip. It’s up to you.
Another tool that Avid Media Composer has that is similar to Trim To Fill is Fit To Fill. For Fit To Fill to work you need to set In and Out Points on a clip in the Source Monitor and In and Out Points on the Timeline then hit the Fit To Fill button.
However, the downside of Fit To Fill is that 1) you can’t adjust it without promoting it to a Timewarp Effect and 2) it creates an unnecessary new Master Clip for you to keep track of inside your project.
That’s it! Trim To Fill is the quickest and easiest way you’ll find to do time manipulation in Avid Media Composer.
To recap: find Trim To Fill in the Timewarp folder in the Effect Palette. Drag and drop it onto your clip. Enter Trim Mode and trim the clip to the desired length to speed it up or slow it down. Re-enter Trim Mode to adjust as needed. And then move on to your next edit!
Next time you’re watching a movie, try randomly hitting the pause button and observe what you see. Does the way the action is currently framed reflect the story? If the filmmakers are worth their salt, chances are it actually offers up a bit more information than you initially thought.
This is how you can communicate far more than what is written for the characters’ dialogue and add delicious layers to your story. How you have chosen to frame the action tells us so much more than words ever could, revealing details of the characters’ relationships, their motivations, and their emotions. And like all filmmaking tools, there are many conventional ways to utilize this one, as well as many ways to use it unconventionally. Let’s fire up some examples!
FRAMING YOUR CHARACTERS
We’re not just talking about the frame of the actual screen, that is, framing up your shot by deciding between a wide shot, medium, or close-up, or deciding what focal length to use. We’re talking about the placement of your character within the frame, along with what is surrounding him/her.
How a character is positioned relative to the rest of their surroundings can say a lot about their state. (For example, are they feeling trapped? Or free? Can you use the environment to your advantage in delivering this information?)
If you shoot someone standing in a doorway, you’ve literally “framed” them. This may have them appear boxed in, or it could show that they are imposing or blocking the way out for another character. It all depends on how you use it.
In this above shot from Road to Perdition (2002) this is our introduction to a new character. The cinematographer (Conrad C. Hall) has applied a vertigo effect on this shot to make the train tracks warp and stretch eerily as the character approaches camera. This tells us that this new character is possibly twisted, abnormal or treacherous. And the shot doesn’t lie, this is one bad dude coming into the story.
In this shot from Garden State (2004), the main character is experiencing plenty of existential turmoil as he returns home for his estranged mother’s funeral, back to the town he left behind years ago. This frame of him literally disappearing into the background lets us know that he is lost, both metaphorically and physically.
Are we looking directly at a brick wall? Or are we staring off across a barren landscape to behold a beautiful sunrise?
The amount of depth the shot has, and where you’ve chosen to place your subjects in that depth, can give us information on the path those characters are currently on (eg. the cowboy starts trudging off into the sunset) and the relationship between each of those subjects.
Depth can also help us guide the eye where we want it to go. Does the long stretch of stairs lead us straight to the broken window that our character fears an enemy will jump through after her?
The legendary Gregg Toland, who gave us the best case study of cinematography to date in the form of Citizen Kane (1941), framed these two characters with an uncomfortably large space separating them (Kane sitting in a chair in the foreground, and his wife Susan sitting in the maw of the ginormous fireplace on the opposite side of the room). This lets you know everything you need to know about the state of their marriage at this point.
THE ALL-IMPORTANT “RULE OF THIRDS”
Ah, yes. Here is another one of those pesky “rules” that everyone says you should follow, but also a fun one to break when it calls for it. You’ll see why.
The Rule of Thirds puts forth that every frame be divided into a grid of three rows and three columns (see the fancy grid above?). With it serving as a guide, we can place our subjects accordingly.
If we’re seeing a person’s close-up reaction, their eyes would be placed in the upper-third of the frame along the topmost horizontal line.
If it’s a person walking across the screen, you place them in the opposite ⅓ of frame from the direction they are walking. Obviously, because we want to see where they are walking.
If it’s a lone subject that we are focusing on, then place it smack dab in the middle of the frame, where our eyes naturally start to focus.
In fact, central framing is often employed for fast-paced sequences because of our tendency to stay focused on the centre of the screen. We can see this used with great success during the frenetic pacing of Mad Max: Fury Road (2015) which was supposedly shot with the cinematographer placing crosshairs on the centre of the frame to line up the subject in most shots. You can see how effective this was by pressing play on the video below and watching the movie sped up 12x (Credit for the video goes to the talented editor Vashi Nedomansky. Check out his other work). Notice how you can still follow the action? That’s the magic of central framing, baby!
Now…that being said, here’s where we get to smash this rule to bits.
These are the conventional, typical uses of the Rule of Thirds, but what happens when we go against the grain and place subjects where they wouldn’t be expected to be in the frame?
In the above scene from Drive (2011), we are seeing the meeting of these two characters for the first time. What does their placement in each of these alternating frames tell you?
They both occupy the same space of the frame, even though Ryan Gosling’s character (named Driver) should by the Rule of Thirds be placed in the right third of the frame looking left. Oscar Isaac plays Standard, the husband of Carey Mulligan’s character Irene; he has just been released from prison and returned home.
As we cut back and forth during their dialogue they are literally fighting for the same position in the frame, which Standard holds over Irene in the background. We can tell they are at odds with each other before a word is ever spoken.
NOW THAT YOU KNOW THE RULES…
The beauty with each of these filmmaking conventions is the way you can rely on them as a guide to build an aesthetically-pleasing frame that tells your story, whether you use them in a way the audience is expecting or in a way that surprises them by going against the grain.
Some good practice would be to try going around with a camera for a day and see how many ways you can frame up subjects. See how you can build a story around the image by simply framing them within their environment, changing the depth between them and other subjects/their background, and using that Rule of Thirds to place them in the proper area of the frame.
Perhaps this is why it’s still called “photography” when it comes to filmmaking. Each frame can serve to tell a bit of the story, if you build that frame with the story in mind.
Now that we’ve gone over a few of the ways you communicate to your audience with some simple composition techniques, perhaps we could focus on one particular movie and see how well those techniques can be used. Stay tuned for Part 2 of this article where we’ll be doing just that.
At least that’s what I always tried to explain to my high school teachers when they caught me copying and pasting my book reports from Wikipedia. It never worked. But luckily in cinema, it can!
An homage is a display of public respect. Or with regards to artistic works like movies, it is when the filmmaker references or imitates another filmmaker’s work. This can be done visually (eg. copying a movie scene’s style or action) or audibly (eg. imitating a recognizable musical score).
On occasion this game of “copycat” gets called out for simply ripping off a technique that another filmmaker has perfected. But it can also work as a tipping of one’s hat to another respected film or as a sly reference that only aspiring cinephiles may catch.
Let’s not overlook the fact that homages can also be done for creative effect. They can elicit a laugh from the audience, like a certain Psycho reference we’ll get to shortly, or boost the emotional impact of a scene. The catch is that the law of diminishing returns applies here: the more times a movie or filmmaker gets referenced in other works, the less effective the reference becomes…and the more likely your audience will groan with disappointment when it is attempted over and over again.
To help you execute a well-placed and timely tribute to another filmmaker’s work, here are a few honourable (and dishonourable) examples of cinematic homages that you may recognize. As you’ll soon see, it can be achieved through visuals, score or even an overall style.
THE UNTOUCHABLES / BATTLESHIP POTEMKIN
If you attended an Intro to Film course in university, you likely watched this stunning scene from Battleship Potemkin (1925). If you skipped that course, you would recognize it instead from the equally tense scene in The Untouchables (1987), the loose portrayal of Prohibition agent Eliot Ness’ campaign to take down legendary mobster Al Capone.
In the former, the climactic scene of the black-and-white Soviet silent film depicts a runaway baby carriage tumbling down the Odessa Steps as a distraught mother chases after it. This all plays out against the backdrop of an army quelling an uprising in support of sailors who have staged a mutiny. It’s incredibly tense, even for a silent scene.
Brian De Palma got a hold of that scene and baked it into his climactic action set piece of his historical crime drama (very loosely based on history) as Eliot Ness and his partner ambush Al Capone’s men at a train station, with the agent frantically chasing down a runaway carriage caught in the crossfire.
Which one of these did it better? It’s a tough call, but De Palma gets the point for taking an already intense sequence, maintaining an almost-silent approach (only the musical score, gunshots, and the creak of the carriage wheels are heard) and building upon it with the stakes raised for our hero: kill the bad guys, save the baby, look badass with a shotgun in slow motion. Check, check and check.
You get bonus points of shame if you watched these two examples and recognized them as the opening scene of Naked Gun 33 ⅓. For shame!
FINDING NEMO / PSYCHO
Psycho (1960) is quite possibly the most oft-referenced movie in history. Who would have ever thought that this Hitchcock horror classic would pop up in the lighthearted, family-friendly animated adventure Finding Nemo (2003)?
Nemo is under threat of being given to a rambunctious child, Darla, who strikes fear in the hearts of fish everywhere for her tendency to accidentally kill her underwater pets. To us, she’s a child. To the residents of this particular fish tank, she’s a maniacal killer. So the buildup to her arrival means that she has to make a big entrance.
The door bursts open and we hear the sharp strings of the Psycho theme that indicate that danger has arrived. This sound is so universal that kids and parents alike recognize it, so everyone unmistakably knows what that sound brings: the horror that awaits Nemo if he doesn’t find a way to escape. And FAST.
This homage is played as much for laughs as it is to associate Darla with the menace that the fishes see when she arrives.
INGLOURIOUS BASTERDS / ONCE UPON A TIME IN THE WEST
One does not simply choose a clever cinematic reference in a Tarantino movie at random. His movies are so jam-packed with references to obscure films that this list might as well include his entire filmography. But let’s highlight one of his more subtle ones, in his WWII exploitation masterpiece Inglourious Basterds (2009).
Tarantino is not always so soft-handed with his film references: where other filmmakers give a small nod to another work, he vigorously shakes it like a paint can. However, his opening scene to this sweeping WWII story begins in a style reminiscent of one of his most idolized directors, Sergio Leone. Specifically, it recalls the incredibly slow-burn start to Leone’s opus Once Upon a Time in the West (1968).
In his spaghetti western epic, Leone opens with a drawn-out introduction to our title character that plays out over 10 minutes almost devoid of dialogue. The passing of time makes the air so thick that when the scene finally culminates with our protagonist arriving, the musical score of strings slowly builds, and a flash of gunshots shatters the tension.
Tarantino mimicked this opening with a similar pace, even starting with the title card “Once Upon a Time in Nazi-Occupied France” (the rumoured original title). The Nazis slowly approach the farm, our charming antagonist engages in a lengthy process of interrogation and observation before the scene explodes with an operatic crescendo.
As a filmmaker, Quentin has placed great emphasis on the use of sound and score to establish the environment of his scenes. He has constantly borrowed these elements from other films to achieve the desired effect when necessary, evidenced by the rest of his directorial credits. He’s not the only director out there who relies heavily on sound to set the mood; check out these other directors who creatively use sound in their films for great effect.
Both scenes serve to introduce characters masterfully (the hero in Leone’s film and the villain in Tarantino’s) and draw the audience in slowly, setting the stage for the rest of the story which we are assured will play out with as much intention as the opening.
Whether you see homages as a tool for ripping off material, enhancing your scene with a commonly-used device or parodying other works for a laugh, the truth is that cinema was built on them and will continue to be for many years.
As a filmmaker you are influenced by movies that you grew up on, ones that horrified you, entertained you, struck you with awe and made you think. When you set out to craft your own stories for the screen, you draw from your well of cinematic knowledge that you created as an audience member.
This is why nostalgia is at a peak; cinema now is reflecting the movies that many of us grew up on for the last couple of decades. It’s speaking our language, reminding us of the stylistic choices that other directors implemented, the classic visuals that are still fresh in our minds and the musical themes that are resampled into something new.
In a way it shows respect to the filmmakers of the past and immortalizes their works for years to come. How else can we get the kids to watch Battleship Potemkin, which need I point out is a movie approaching its 100th birthday?
Dozens of audio editing apps are spread across the market. Some expensive heavyweights like Pro Tools and Nuendo are popular amongst top tier post-production professionals. At the other end of the spectrum, free app Audacity is prized amongst musicians. There are others: Ableton Live, Sound Forge, Cubase, and more. Each has their own uses and fans. One app, though, has become increasingly popular: Reaper.
The Reaper editing app was released in 2006. In the twelve years since, more and more people have evangelized the cross-platform digital audio workstation (DAW). It’s common to hear excited professionals share how delighted they are after switching to Reaper.
Perhaps you’re considering this, too. Switching editing apps isn’t a simple task, though. Each app can be vastly different from its design to workflow to a single keystroke for making a simple edit.
Today’s post is meant to help ease the pain. It shares why people love Reaper, how it is different, and the best way to transition to working with this popular editing app.
Why People Are Switching to Reaper
Why switch to Reaper? After all, adopting a new editing app may mean days or even weeks of relearning simple editing tasks. That’s not appealing when project deadlines are approaching.
Just the same, many pros are finding the results are worth the price. Why?
The most common reason is value. Reaper offers a generous, fully functional trial license. Once the evaluation period has expired, users can pay $60 for a discounted license, or $225 for a full commercial license. There is no functional difference between the price tiers; both the discounted and commercial licenses share the same features. This is a stark contrast to the high prices and subscription plans of other software.
The app is portable: it has a small hard drive footprint, can run from a USB flash drive, and doesn’t need hardware copy protection such as a dongle.
The software itself is cross platform, and known to be fast, stable, and powerful, all while being light on system resources. The developers work fast and updates are released swiftly, all backed by excellent customer support when you need it, and helpful forums for those that want to make their own way.
Overall, Reaper is an app that is highlighted by its ability to provide choice. We’ll explore this more in a moment.
Pro Tools to Reaper
Many Reaper users are beginning after switching from Avid’s Pro Tools. What are some standout Reaper features that Pro Tools users will appreciate?
Pro Tools fans will find some significant changes. For instance, unlike the different flavours of Pro Tools, Reaper has no limit on the number of tracks and no limit on the number of plug-ins per track. What’s more, tracks and the sounds on them are handled differently, too. Most notably, tracks aren’t limited to either MIDI or audio. Instead, Reaper can mix any type of media on one track. Tracks can also mix different channel counts and sampling rates: a 44.1 kHz surround clip can follow a mono 96 kHz clip. Reaper is similarly flexible in its approach to plug-ins. With a bit of work, an endless number of plug-ins can be added to a track, mixing both 64- and 32-bit plug-ins. Reaper also supports VST plug-ins natively; Pro Tools can use them as well, but only through a “plug-in wrapper”.
Many Pro Tools wish lists dream about opening multiple sessions at once. Reaper delivers: it’s possible to open many projects at the same time in separate window tabs.
A short list of smaller perks:
Automate session playback rate.
A dedicated mono track button.
A dedicated flip phase track button.
Detailed exporting or “rendering” options.
Perhaps the most significant change for Pro Tools users is the amount of customization possible. Reaper can be tweaked with themes or “skins” to refresh the editing interface. Those wanting to dig deeper can use ReaScript to program scripts and extensions in EEL2, Lua, and Python languages. Reaper can be vastly modified to any taste.
What To Expect
Scratching your head when reading words like EEL2, Lua, and Python? You’re not the only one. While Reaper’s customization grants power to the tinkerers, those who prefer to begin simply and swiftly may find Reaper daunting. What parts of Reaper will Pro Tools users or audio editing newbies find challenging?
Pro Tools offers an arguably cleaner, more polished interface. Of course, Reaper’s UI can be tweaked, but this requires a time investment or tracking down a pleasing theme. On the subject of clutter, Reaper creates small peak waveform files (“.reapeaks”) next to every sound file, which can muddle your sound library folders.
Similarly, Reaper’s menus and preference window are complex and a bit much to digest at first. A lot can be accomplished by right-clicking, however that may not be as intuitive to people arriving from other apps where commands are more apparent. And, while it is true that nearly any task can be automated with scripting, “action” programming, or mouse movement, this too takes some time to understand. Overall, Reaper’s power takes time and patience to unlock. That is the cost of Reaper’s customization: it is complex, and requires diligence to master.
Here are stock Pro Tools features missing from Reaper:
Importing and exporting AAF and OMF files.
Trim to selection.
Fade to cursor.
Adjusting to Reaper
It’s important to remember that most of the features above can be accomplished in Reaper with a bit of work. For example, the AATranslator app ($199) can perform AAF and OMF conversions for Reaper. Other clever programmers can create workarounds by using Reaper’s “Action” window to string together a number of keystrokes or commands.
Here are suggestions to help you adapt to Reaper a bit more smoothly.
First, some terminology:
A Pro Tools “session” is known as a “project” in Reaper.
Reaper’s “Media Explorer” is similar to Pro Tools’ “Workspace Browser.”
“Regions” are known as “media items” in Reaper instead.
In Reaper, “Regions” refer to a span of time between two markers.
The “Track Control Panel (TCP)” is the list of tracks on the left of the main editing window.
Pro Tools’ automation is handled with Reaper “envelopes.”
“Bouncing” is called “rendering” in Reaper.
There are others. Be prepared that familiar terms may have changed.
Some editing apps work best when media is collected in one location. Reaper is more flexible: media files, as well as the peak waveform files mentioned above, can be stored anywhere. Many pros tweak these preferences to gather files in one place. This is optional, though.
Pro Tools users will find one of the largest adjustments in discovering how they navigate through a Reaper project. Scrolling, zooming, track resizing and so on are all different. Of course, these can be customized to whatever is preferred.
While Pro Tools plays whichever region is selected, Reaper works differently. Instead, the app plays from the playhead position, regardless of what audio is selected. This takes some adjustment, too.
Want to split a media file? It may seem natural to click where you’d like to cut. Perhaps you’d like to trim the beginning and end of a region to a shorter length. You’d be forgiven for thinking you’d simply select the media item you want to change, then type S to create an edit, or choose the Item/Trim items to selected area menu item.
With Reaper, it’s a two-step process: first select the region or “media item” you want to change, then click to specify where the change will be made. For example, to create an edit, select the media item you want to cut. Then, click in a track below to indicate where you’d like the slice to happen.
Trimming and Fading
Advanced Pro Tools users may use the Smart Tool to create fades, trim, edit, or move a clip. Reaper works similarly: fades are created by clicking and dragging the upper corner of clips, and trimming by dragging its left or right edge.
Mouse and Keyboard
Even loyalists admit Reaper’s default key command mapping and mouse actions are confusing. Customize your clicks and key presses to make life easier. How? Mouse actions are modified in the Preferences window. Changing keystrokes takes place in the action list. Both have endless options. Use the search field (Preferences) and filter (Actions) to track down the changes you’d like to make.
Making a Smooth Transition
Whether you need to swap one editing app for another or simply want to try something new, any fresh DAW will have its own learning curve. Adopting the Reaper app takes more effort than most. Its workflow requires adaptation: adjusting how tracks are moved and edited, how projects are viewed and navigated. Time and effort help ensure a smooth transition. The result? Expertise in a powerful and flexible value-packed app.