Building the Sweet Suite

It is August 2019. I have just started teaching video at Delta State University in Cleveland, Mississippi. I meet with the Art Department Chair and pitch an idea: we should create a “postproduction suite” in an empty office. I show him photographs Blackmagic Design uses to promote DaVinci Resolve software. These are, we recognize, ridiculous: serious-looking editors and colorists using the software in a range of beautiful spaces, with producers looking on. These images show a glamorous world, far removed from the cramped basement editing closets and exhausted nightshift editors of reality.

In the weeks that follow we research and design a postproduction space, emphasizing calibrated color grading and audio monitoring so our students can prepare their work to meet professional broadcast standards. The computers, monitors, control panels, and software are all quickly installed. For a brief moment, everything is perfect.

We begin bringing students into the space in small groups. The plan is to teach color correction basics in large fundamentals classes, then work in here with advanced classes. By Spring, students are using Adobe Premiere and DaVinci Resolve in this darkened room, turning the wheels of a Tangent Element Panels Kit, color grading at a professional level.

In January 2020, a survey of filmmakers at the Sundance Film Festival claims DaVinci Resolve software was used for color grading on 91% of the films screened, and Adobe Premiere was used for editing 65% of the films.[1] Our approach is confirmed. As the term unfolds, our students’ work is looking better than ever. A student wins an award in a state competition; others showcase their work in our annual exhibition. In March 2020, the Chair and I meet in the Digital Media Arts Center. The Postproduction Suite has been a success. In the days that follow, COVID-19 cases increase in Mississippi, and quarantine begins. We start teaching online, experiencing a new digital divide: students and professors are seeing and hearing everything differently.

The Historic Shift from Color Timing to Color Correction to Color Grading

Before DaVinci Resolve took over the world, working with color was seen by filmmakers as a mysterious alchemy. Imagine you filmed—with real film, in a motion picture camera—a scene at the beach. The negative from your camera would be developed at a film laboratory. Your editor would cut—literally cut, and then glue back together—clips from a work print that had been “printed” from your original negative. Now imagine watching your scene and realizing your script described a golden sunset but your cinematographer delivered material that looked like a cold winter morning.

Traditionally, you would address this through a process called “color timing.” You would meet with a film lab technician and watch the film while adjusting the light used in projecting it. Instead of a neutral white projector light, calibrated filters would be applied so you could see how you might change the look of your film. The decisions made in this session would be used to make a new film print, appropriately warmer in color. The expense of this approach, and the externalization of control over the color process, left color decisions primarily to a film’s cinematographer. Postproduction color work was for subtle refinements, or for correcting less-than-optimal results as in our imagined beach scene.

In parallel to film production approaches, however, broadcast television developed its practice in ways that created a digital video and computer editing revolution and eventually challenged the film-based approach to color. The key requirement was the shift to the use of digital media—files—as the actual material for a project rather than as a proxy or reference that told us what film frames to conform from our original negative. By the mid-nineties you could adjust the color in a digital photograph with Adobe Photoshop, and it became clear that color adjustment in video production would become practical as soon as faster hardware was available.

Recording to video tape had always been considered inferior to the quality film offered. Yet once it became possible to digitize tape-based material or import material recorded in-camera using a digital format, producers saw the power inherent in “desktop nonlinear editing.”

Digital color correction was an obvious application. Our beach scene could be adjusted—as a digital file—to be warmer in color. Editing software began including effects for adjusting color and tonality.

Digital color grading came next, a more comprehensive approach considering not just the color in single shots, but how color works throughout a project.

Hollywood cinematic production resisted this approach until the process used for O Brother, Where Art Thou? (Joel and Ethan Coen, 2000) proved that a “digital intermediate” could work beautifully.[2] This involved shooting on film, digitizing the film into media files, color correcting in software, and finally exporting out to a film print for screening.

Two decades later, the concept of “color timing” makes little sense to students, since they have worked with Adobe Photoshop and understand the simplicity of changing visuals in that program. They quickly understand Adobe Premiere’s “Lumetri Color” adjustments, and the related options Resolve offers them. They no longer think of media files as connected to some original reel of film. If film is shot, the digital scan of that film is now “the original.” If only digital is shot, then the files from the camera are now the source material. There is a tendency, then, to think that once we have files, they are infinitely malleable. The truth is, even without the use of film, there is a production chain and color and tonality decisions are made at every stage of the process.

Understanding Problems in the Remote Production and Review Chain

In our postproduction suite, the goal was to view our work on a calibrated screen in an environment with appropriately neutral lighting. We wanted to trust what we were seeing. A quarantine situation makes that standard seem impossible. Sending everyone home to edit and color correct taught us that perception is changed by the light in your editing space, the calibration of your screen, the size of your screen, and the method you use to listen to audio. These factors must be considered throughout a project, from camera to final critique. In May of 2020, I briefly thought: I should re-create the “Sweet Suite” at home. Then I realized that if students cannot work in a controlled way, my controlled space would only allow me to critique better. It would not help a student’s work or learning. I began to consider, instead, a “chain” of production concepts that would be more useful.

CHAIN LINK ONE: SCOPES

Do not start your online teaching with the camera, or the screen, or with recommended viewing conditions. Start with the concept of the “scope.” Like a Platonic ideal, within your video software there are scopes that are absolutely true, perfect, and objective. If we understand what these scopes reveal, we have a measurement that works for both student and professor, and which leads to work that matches broadcast standards.

Key Principles: Ignore “histograms” and instead use software that allows you to view a Waveform Luma scope for tonality and an RGB Parade scope for color balance (these are available in the free version of DaVinci Resolve). While students have varied access to cameras, remote production with any camera can be based on scopes if students use phone cameras for reference. Apps like Mavis and FiLMiC Pro are excellent tools for recording video on a phone, but more importantly both allow a student to see scopes while pointing the camera at a scene.

Key Actions: Have students consider production as starting with the evaluation of each shot on scopes. Have them post screen captures showing a shot viewed with a phone app. This can reveal the information seen on the scopes, as well as selected camera settings. If your student is going to present an audiovisual essay to camera, have them send a test screen capture (showing scopes) well before they are going to record the presentation. This first link in the chain requires students to use scopes in postproduction to guide their color and tonality adjustments.

Teachers must also develop their knowledge regarding scopes:

Use an appropriate phone camera app, learn how to view the Waveform, learn how to adjust the exposure, and learn how to set the color temperature and tint. In postproduction software, learn to view a Waveform Luma scope. This shows, in white lines against a dark background, the tonality information in a shot. The scope runs from 0% (solid black) to 100% (solid white), or from 0 to 1023 on a 10-bit scale. Notice the light tones, the midtones, and the dark tones. Notice anything overexposed at 100% or “crushed” to 0% (Figure 1).

Figure 1. A Waveform Luma scope in DaVinci Resolve.

In postproduction software, learn to view an RGB Parade scope. This will show, in red, green, and blue lines, the relative levels of these three primary colors. Compare red levels to blue and you are seeing an indication of the “color temperature” of the shot. Compare the green level to red and blue, and you are seeing the “tint” of the image—what we think of as the green/magenta color cast (Figure 2).

Figure 2. An RGB Parade scope in DaVinci Resolve.
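For teachers who want to show what these two scopes actually measure, the underlying arithmetic is simple enough to sketch in a few lines of Python. This is an illustration, not how Resolve is implemented: it assumes an 8-bit RGB frame stored as a NumPy array, and it uses the standard Rec. 709 luma weights.

```python
import numpy as np

# Rec. 709 luma weights: how much R, G, and B contribute to perceived brightness.
REC709 = np.array([0.2126, 0.7152, 0.0722])

def waveform_luma(image):
    """For each pixel, compute the luma value a Waveform Luma scope would plot.

    `image` is an (H, W, 3) array of 8-bit RGB values; output is on the 0-100% scale.
    """
    luma = image @ REC709              # (H, W) luma per pixel
    return luma / 255.0 * 100.0        # express as 0-100%

def rgb_parade(image):
    """Split the image into its three primaries, one 'parade' panel per channel."""
    return [image[:, :, c] / 255.0 * 100.0 for c in range(3)]

# A warm (orange-cast) test frame: red level high, blue level low.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:, :, 0] = 220   # red channel
frame[:, :, 1] = 160   # green channel
frame[:, :, 2] = 90    # blue channel

wf = waveform_luma(frame)
r, g, b = rgb_parade(frame)
print(round(wf.mean(), 1))   # mid-to-light tonality, around 65.8%
print(r.mean() > b.mean())   # red sits above blue on the parade: a warm cast
```

Reading the two results the way a student would read the scopes: the waveform places the shot in the upper midtones, and comparing the red panel to the blue panel reveals the warm color temperature, exactly as described above.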

CHAIN LINK TWO: “F.A.C.E.”

If one can easily change exposure or color temperature later, a student asks, how important is it to be “perfect” in recording a shot? Why not use Auto White Balance, Auto Exposure, and Auto Focus? The ability to make adjustments in postproduction tends to result in sloppy approaches to recording material. Students should be taught to concentrate on preserving essential detail when filming. To communicate that, the acronym “F.A.C.E.” can be taught.

F = Focus. Sharpening can be applied to an image in post, but it is difficult to repair shots that are out of focus. Often students forget to concentrate on sharp focus on the eyes, for example. Small viewing screens contribute to this problem. Sometimes a student is happy with a project until, when seeing it larger, they realize they have missed focus.

A = Aperture. Once a student can focus appropriately, they should then learn to set the lens aperture to control the shot’s depth of field. This is simple on a DSLR or cinema camera, and is an essential production technique. Production students should fully explore this; other students should learn that aperture influences depth of field (cell phone cameras, unfortunately, cannot change aperture, so students recording on phones must understand this limitation).

C = Color. While “color correction” makes it sound as if we have ultimate control of color in our postproduction tools, it is still essential for students to understand the basics of setting color temperature in camera. At minimum, students need an understanding of using the common color temperature settings of 3200K and 5600K. If an equipment budget is available, using a grey card or color checker allows custom setting of white balance. Recording a test shot with this card visible is useful for postproduction.

E = Exposure. Once students are taught about scopes, they can use these to understand what they are recording, and what detail is NOT being preserved. Is the scope showing a window or the sky at 100%, burning out? Is a shadow area hitting 0%, losing all texture? Preserving detail and texture can be made a priority, resulting in a very usable image. Confirming exposure on scopes should be made part of the student’s thought process.
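The exposure check described above reduces to a simple measurement: what fraction of the frame is pinned to the extremes of the waveform? The sketch below is illustrative only; the 0.5% and 99.5% thresholds are arbitrary teaching choices, not a broadcast specification, and it assumes luma values already scaled to the 0–100% waveform range in a NumPy array.

```python
import numpy as np

def clipping_report(luma_percent, low=0.5, high=99.5):
    """Report what fraction of a shot is crushed to black or burned to white.

    `luma_percent` is a 2-D array of luma values on the 0-100% waveform scale;
    the low/high thresholds here are illustrative, not a standard.
    """
    total = luma_percent.size
    crushed = np.count_nonzero(luma_percent <= low) / total
    burned = np.count_nonzero(luma_percent >= high) / total
    return crushed, burned

# A frame that is mostly midtones, with one burned-out "window" region.
frame = np.full((100, 100), 45.0)   # midtones at 45%
frame[:20, :50] = 100.0             # overexposed area, pinned to 100%

crushed, burned = clipping_report(frame)
print(crushed)   # no shadow detail lost
print(burned)    # 10% of the frame burned out, the "window at 100%" case
```

This is the numerical version of the question posed above: is the window or the sky sitting at 100%, and is any shadow area hitting 0% and losing all texture?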

Key Principles: Emphasize checking and confirming each component of the “F.A.C.E.” acronym before recording. With some cameras and smartphone apps, students can turn on “Focus Peaking” to see if the eyes are in focus. With a Waveform scope, students can see if anything is overexposed; with an RGB Parade scope, they can see if there is a color cast. While the “F.A.C.E.” acronym is essential to video production, it also applies to a student preparing an audiovisual essay or a class presentation. A self-presentation should be in focus, appropriate in color temperature, and accurately exposed.

Key Actions: Give video production students “technical exercise” assignments to prove their control over these technical aspects before they are engaged in longer projects. Give media students in less-technical courses a simple “test shot” assignment before they craft a video essay.

Teachers should:

Learn to set White Balance to match the light in a given situation. Emphasize standard color temperature settings like 3200K and 5600K. Learn how to use Custom White Balance with a calibrated grey card or color checker.

Learn and demonstrate the characteristics of “bad” shots. Record video with the incorrect color temperature. Record additional overexposed or underexposed shots. Show your students what these shots look like on a Waveform or RGB Parade scope.

Develop downloadable guidance on a recommended setup for students self-recording audiovisual presentations. Test this setup, and refine it so students can reliably get material that is in focus, well exposed, and recorded with neutral color.

CHAIN LINK THREE: BROADCAST AUDIO STANDARDS

The “F.A.C.E.” acronym emphasizes gathering visuals in a controlled, accurate manner. Similarly, students should be taught to record audio in the cleanest way possible. Record interviews with peak levels between -20 and -12 dBFS (decibels relative to full scale). Bring microphones as close as is reasonable. Control background noise. Beyond these standard production guidelines, however, they need a framework to understand how to proceed from recording to final audio tracks.
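The peak-level guideline is easy to verify numerically. As a sketch, assuming audio samples normalized to the -1.0 to 1.0 range, the peak in dBFS is 20 times the base-10 logarithm of the loudest absolute sample value:

```python
import math

def peak_dbfs(samples):
    """Peak level of a recording in dB relative to full scale (0 dBFS)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

def peaks_in_interview_range(samples, low=-20.0, high=-12.0):
    """Check the guideline above: interview peaks between -20 and -12 dBFS."""
    level = peak_dbfs(samples)
    return low <= level <= high

# A take whose loudest sample reaches 0.2 of full scale.
take = [0.05, -0.2, 0.1]
print(round(peak_dbfs(take), 1))      # about -14.0 dBFS
print(peaks_in_interview_range(take)) # inside the recommended window
```

A take peaking at 0.2 of full scale lands near -14 dBFS, comfortably inside the recommended window; a take peaking near 1.0 would sit at 0 dBFS and risk clipping.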

Just as visual scopes allow us to check our video against objective standards, we can check our audio levels against established broadcast television standards. Both Adobe Premiere and DaVinci Resolve offer presets for measuring audio levels in the ATSC A/85 standard (for broadcast in the U.S.) and the EBU R128 standard (used in the U.K. and throughout Europe).

In Adobe, have students complete any audio cleanup, equalization, and general sound mixing first. Then, use the “Loudness Radar” tool (as directed in Adobe’s tutorial).[3]

This process will reveal “Average Program Loudness” for the video. If one is targeting the ATSC A/85 standard, then the goal would be to achieve -24 LKFS or just slightly quieter. After a first pass with the Loudness Radar, a student would adjust audio track volume levels and possibly use keyframes to adjust sections of the project as needed (Figure 3).

Figure 3. The Loudness Radar in Adobe Premiere.

In DaVinci Resolve, in the Fairlight section of the program, there is a “Loudness” meter that can be set to any broadcast preset. Instead of showing a number in LKFS or LUFS, it is set so that the “Integrated” meter target is zero. After a first pass, one should raise or lower audio levels to get a result at “0” or just slightly quieter.
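Whichever tool reports the measurement, the adjustment after a first pass is the same arithmetic: the difference between the target and the measured loudness is the gain, in decibels, to apply to the mix. The sketch below shows only that arithmetic; it is not the ITU-R BS.1770 measurement that the Loudness Radar and the Fairlight meter actually perform.

```python
TARGET_LKFS = -24.0   # ATSC A/85 Average Program Loudness target

def gain_to_target(measured_lkfs, target=TARGET_LKFS):
    """How many dB to raise or lower the mix so it lands on the target."""
    return target - measured_lkfs

def apply_gain(samples, gain_db):
    """Scale audio samples by a dB gain (amplitude ratio = 10^(dB/20))."""
    ratio = 10 ** (gain_db / 20.0)
    return [s * ratio for s in samples]

# A mix that the Loudness Radar measured at -19 LKFS: 5 dB too loud.
gain = gain_to_target(-19.0)
print(gain)                     # -5.0 dB: pull the whole mix down
quieter = apply_gain([0.5, -0.5], gain)
print(round(quieter[0], 3))     # each sample scaled by 10^(-5/20), about 0.562x
```

This mirrors the student workflow described above: measure, compute the offset from the target, adjust track levels, then measure again to confirm a result at -24 LKFS (or “0” on the Fairlight meter) or just slightly quieter.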

Key Principles: Targeting a broadcast standard and checking this on a Loudness meter is essential for evaluation of student work. Audio levels are perceived subjectively and inconsistently, especially on laptop speakers or earbuds. Working toward broadcast standards allows students to understand expectations for professional work.

Key Actions: Provide students with an audio file. Have them adjust it to broadcast standards. Require a screen capture showing the Loudness Radar or Loudness Meter after adjustment.

CHAIN LINK FOUR: ONLINE CRITIQUE

In Fall 2020, I taught online. I was hesitant to critique work through Zoom meetings. I imagined the known problems would be amplified. We were on uncalibrated laptops, in improper viewing environments. I expected a disaster. Instead, we experienced only minor problems.

Our videos looked different in color online, but only slightly. The issue of scale was more significant. There is a difference in seeing a video projected large versus on a laptop screen during a Zoom meeting. Audio was our worst problem. Many students were on laptop speakers. Several switched to quality headphones, and this greatly improved the result. Ironically, screen calibration—the issue that inspired this paper—mattered less than expected. We saw minor problems, but our discussions were great. We compared our perceptions of the work. Distrusting our monitors became a teachable moment. From this experience, I developed two strategies.

Strategy One: record video using scopes, working to preserve detail, let scopes guide postproduction, then make final adjustments by eye. The final step here recognizes that color grading is about making aesthetic judgments. Colorists develop a personal style and rely on their own perception. If we cannot work in perfect conditions, then let scopes guide us until a final step. Then trust the student’s eye. If we lock entirely onto scopes as the final arbiter, we lose the important moment where a student embraces the role of decision maker. It is more important to keep this than to have a “perfect” video.

Strategy Two: create a “Senior Producer” role. There is a tendency to treat video as an assignment from an Art Director. We submit a draft, then the teacher tells us how to “perfect” it. Instead, assign students into a technical critique role, evaluating sound and color on another student’s draft. For student presentations, prompt feedback on a presentation’s look and sound. Shift from the teacher as absolute judge to a distributed process. There is no single absolutely correct opinion on color, tonality, or cinematic aesthetics. There are many right answers.

Adobe now promotes a service called Team Projects. Beyond allowing remote editing, this facilitates collaboration between Adobe Premiere editors and Adobe After Effects motion graphics artists. Team collaboration is the future. Professors and students should learn from our current challenges, as changes in the industry and culture of video production point to decentralized production. The “Sweet Suite” is moving to your home office.


Ted Fisher is an Assistant Professor in the Art Department at Delta State University in Cleveland, Mississippi. He has an M.F.A. in Photography from Claremont Graduate University and an M.F.A. in Film Directing from the University of Edinburgh.


    1. “A Cut Above: The Editing Tools Behind the Films at Sundance,” No Film School, January 23, 2020, accessed September 29, 2020, https://nofilmschool.com/editing-tools-films-sundance.

    2. “O Brother, Where Art Thou?,” IMDb, accessed September 29, 2020, https://www.imdb.com/title/tt0190590/.

    3. “Measure Audio Using the Loudness Radar Effect,” Adobe, accessed September 29, 2020, https://helpx.adobe.com/premiere-pro/using/loudness-radar.html.