note: This is an essay I wrote on film music for “Sound in Motion Pictures” class. I thought this would be an interesting read for you guys who are interested in the topic 😊 (warning: there are spoilers for the movies mentioned.)
Music in contemporary motion pictures remains a highlight for casual filmgoers – many people recognize themes from popular franchises like Marvel and The Lord of the Rings, and others create playlists of their favorite film music on Spotify or YouTube. From a more scholarly perspective, however, film music often acts as a driver of film narrative. This happens in many ways, but this essay focuses on how contemporary film still operates under the theoretical framework outlined by Claudia Gorbman in her 1987 book, Unheard Melodies: Narrative Film Music.
I have chosen the period from 2000 to the present as "contemporary" because of its significance as our turn-of-the-century era, one that brought gargantuan technological advancements and a shift in societal perspectives. Film music of this era still follows the principles laid out by Gorbman, creatively employing or subverting them as narrative tools in the films that use it.
I have endeavored to choose a wide sampling of films from different genres – from commercial to independent, all decades of the aforementioned era, and covering different target audiences (children, teenagers, and so on). With that said, this essay will explore how film music is used as a narrative tool in terms of blurring diegetic and nondiegetic boundaries, orienting the audience through intertextuality, revealing emotional and psychological depth, and providing narrative closure.
Music and the blurring of diegetic and nondiegetic boundaries
Film music can be categorized in two ways: music that comes from a visible source in the film (such as onscreen musicians playing a piece that the main character hears), and music that does not come from the world of the film (i.e., the score). Gorbman refers to the scholars Genette and Souriau in building her definition of "diegesis": "the space-time universe and its inhabitants referred to by the principal filmic narration" (p. 21) – in other words, the world within the film. Thus, diegetic music is "music that (apparently) issues from a source within the narrative" (p. 22). Nondiegetic music is that which does not come from this world, like the film score, as previously mentioned.
A clear example of how diegetic and nondiegetic music are blurred can be found in the Polish film, The Last Family (2016), which chronicles the tragic family life of renowned Polish painter Zdzisław Beksiński. In one scene, ominous classical music plays nondiegetically as Beksiński walks around his house in an artistic mood, and he eventually goes into his studio and turns the stereo off. The music stops at the same time.

During this scene, the viewer is somewhat tricked at first into believing that the music is nondiegetic due to its similarity in quality to the film’s score, up until the stereo is turned off by Beksiński. At this moment, the music has transformed into a diegetic entity because its source (the stereo) is apparently visible (although earlier the music did not seem to come from inside his house).
Robynn Stilwell (2014) raises an argument against the diegetic/nondiegetic concept: people can often recognize the blurring or crossing of these categories in film, so "if this border is being crossed so often, then the distinction doesn’t mean anything" (p. 184). However, she counters that this frequency "does not invalidate the separation. If anything, it calls attention to the act of crossing and therefore reinforces difference" (p. 184). She adds, "the manner in which the meaning in the distinction multiplies and magnifies in the crossing is indicative of its power" (p. 200). In other words, narrative meaning can change once it passes through the threshold between diegetic and nondiegetic, and can be multiplied (i.e., give rise to different interpretations) and emphasized.
This confusion or multiplicity of meaning due to the blurring of diegetic/nondiegetic can be seen in Inception (2010). In the beginning of the film, the song "Non, je ne regrette rien" by Édith Piaf is chosen by Cobb’s team as a signal to dreaming characters that they must or will soon wake up. The song is first used by an outside agent to notify their architect, Nash, that they are nearing the end of the dream. The viewer understands this as a diegetic event because a Japanese boy presses ‘play’ on the sleeping Nash’s MP3 player, and the song subsequently plays through his headset (Engel & Wildfeuer, 2015, p. 234). The song also plays within the dreamworld, albeit in a slowed-down version with a brass section – thus, the Piaf piece is interpreted as a "diegetic song within the storyline that lets the protagonists know that they will soon wake up from a dream level" and "is discussed by the characters as a specifically chosen element to be recognised by them during their dreaming" (Engel & Wildfeuer, 2015, p. 234).

However, at the film’s climax, when the team is scattered across different dream levels, "a very slow version of ‘Non, je ne regrette rien’ as well as some single parts of the slowed down version can be heard…" and it is no longer clear "whether they are diegetically used and can thus be recognised by the protagonists as signals" (Engel & Wildfeuer, 2015, p. 239), since the source of the music is not shown and it blends in with the musical score. The song is interspersed throughout the sequence taking us through a labyrinth of different dream levels and "time periods" within those levels, all while Cobb’s journey to the lowest dream level transpires. Some characters do not react to the song as it plays, while others take it as a signal to "wake up" to a higher level in the dreamworld.

This lack of acknowledgment means the audience can no longer rely on the song to tell whether the characters hear it or register its function as a signal. It also contributes to the film’s theme of confusion over whether a character is in a dream or in reality, as "it doesn’t orientate the audience, neither locally nor temporally" (p. 242). Rather, Engel and Wildfeuer posit that "emotional orientation becomes the dominant function of the second instance," since the slowed-down version of the song is also heard in "other, emotional significant contexts throughout the film" (p. 244).
Intertextuality in music
Thanks to the wealth of film history that our era can now access from the past century, contemporary films can refer to the past in order to build their own worlds. The animated film The Incredibles (2004), born of the computer-animation movement that began in the 1990s, has a jazzy score that immediately places the film in a specific 20th-century aesthetic, evoking 1960s James Bond films, Mission: Impossible, and Star Trek, while simultaneously building its unique superhero world and serving as a genre homage (Cornell, 2023).

Michael Giacchino’s score heavily features big-band jazz elements – driving bass lines and pulsing horns – with the villain pieces in a minor key recalling the spy genre (Cornell, 2023). It goes hand in hand with the mid-century-modern, retro-future design of the film’s setting. However, this homage to actual mid-twentieth-century media probably went over the heads of its target audience (children born in the 2000s or a few years before), since they are unlikely to have seen the spy films of that era. The music as a worldbuilding tool will likely be understood only by older viewers.
Music as a reflection of emotional & psychological depth
Gorbman writes, “Music appears as a signifier of emotion” (p. 79) and quotes Sabaneev as describing “the image-track, dialogue, and sound effects as ‘the purely photographic,’ objective elements of film,” while music is needed to bring the subjective side of the story, whether it be in an “emotional, irrational, romantic, or intuitive dimension” (p. 79). One of her principles is “inaudibility,” which states that “Music is not meant to be heard consciously [by the viewer]… it should subordinate itself to dialogue, to visuals — i.e. the primary vehicles of narrative,” (p. 73). This can be subverted when in service of expressing emotion. Music can express the inexpressible, when mere dialogue falls short.
Manchester by the Sea (2016) finds Lee Chandler grappling with the grief of losing his brother and having to take guardianship of his nephew, Patrick. In a flashback, it is revealed that he lost his three young children (one of them an infant) in a fire that burned down his house late one night, caused by his negligence. By morning, after the fire has been put out, his brother Joe arrives to comfort him amid the crowd and first responders gathered around. The music deliberately drowns out the dialogue, expressing the unspeakable experience of Lee watching the firefighters carry out the body bags containing his children’s remains.

This method is employed again in a present-day scene, at the funeral of Lee’s brother, where his bereaved nephew, Patrick, seems to be the main focus while Lee stands to the side. However, the weight of grief shifts to Lee as his ex-wife, Randi, enters, visibly pregnant and accompanied by her new partner. Randi’s expression is likewise grief-stricken and compassionate, and after introducing her partner she approaches Lee to embrace him. As the couple leave to sit in the pews, Lee hangs his head. All the while, the dialogue is muted under the strains of mournful chorale music.

This choice to block out dialogue as a "primary vehicle of narrative" reveals the priority given to expressing Lee’s emotions – we can surmise that he remembers the children and house he lost together with his ex-wife, feels deep regret for his failed marriage, and feels consternation that Randi is starting anew, with a new child on the way, albeit with another man. This unbearable weight of grief is conveyed to the viewer somewhat internally, with the music revealing the inner turmoil that lies beneath the surface of the visual presentation of Lee’s character.
In contrast to superseding dialogue to express inexpressible emotions, film music also serves another purpose, in that it fills “empty spaces in the action or dialogue” and “annihilates the silences without directing special attention to itself” (Gorbman, p. 89). This is illustrated in Me and Earl and the Dying Girl (2015), in which the teenage Greg is forced by his parents to befriend his old classmate, Rachel, who is dying from cancer. They become genuine friends, and Greg takes on a project with his filmmaker partner, Earl, to interview people and gather their messages for Rachel in a short film. As Rachel’s cancer worsens, Greg abandons his long-awaited prom date in order to visit Rachel in the hospital, bringing a final cut of his film.

The sequence at the hospital begins with no music, only dialogue and diegetic sound, as Rachel’s mother leaves the two of them alone in the hospital room. As Greg shows Rachel – who can no longer speak – his film through a mini-projector, they lie down silently and watch, not knowing that these are her final moments.
Music fills in the absence of dialogue: a mix of a driving beat, organ, and electronic instruments in an experimental, somewhat triumphant style by the composer Brian Eno. There is minimal emotion on the characters’ faces as the film plays. The viewer realizes along with Rachel that Greg is not showing her the "documentary" he had been making for her; instead, he has brought an experimental, but very personal, film. Rachel tears up as Greg watches nonchalantly beside her, while the music continues to flood the room.
The nondiegetic music is broken when Rachel makes a noise, prompting Greg to ask if she needs the nurse, and he calls her mom. However, the music returns to center stage once her mom and the nurses arrive, blocking out the dialogue and marking Greg’s subjective experience of the moment, expressing his distress.
This scene shows how music, first of all, provides the viewer with an interpretation of the characters’ emotions by filling in the silence of the hospital room. It also fills the emotional void left by Rachel’s inability to speak and Greg’s unspoken emotions and nonchalance. Stilwell writes that “[w]hen the music takes the foreground, it can, literally and metaphorically, seem to spill out over/from behind the screen and envelop the audience, creating a particularly intense connection.” (p. 197). As with Manchester by the Sea, the music in this sequence, especially in the moments that it rises and swells, serves as a tool to help the audience bear the unbearable weight of suffering that Rachel and Greg carry, allowing us to empathize deeply with them.
Another way that film music is used as a tool for empathy is through character themes or leitmotifs. On the definition of leitmotif, Engel & Wildfeuer reference Gorbman: “If [the theme] ‘remain[s] specifically directed and unchanged in [its] diegetic associations’ (1987: 27), it is a motive.” (p. 242). Oppenheimer (2023) features the theme “Can You Hear the Music” for the main character, J. Robert Oppenheimer, which is composed of rising and falling notes played frantically on the violin, with descending synths; gradually increasing in tempo. The theme frequently appears during his dream sequences and his scientific milestones and emotional developments.

Stilwell discusses how music builds a feeling of empathy between a character and individual audience members; while watching, we tend simply to judge whether this connection is weak or strong, in the context of "subjectivity" (p. 197). The combination of Oppenheimer’s dream sequences with the loudness of the theme, as well as intercut close-ups of his face, fosters a sense of empathy, even as his character works through ethical dilemmas in his love life and as a scientist who must decide whether to kill or not to kill with the nuclear weapon he is tasked to create.

Providing narrative closure through music
The last two principles outlined by Gorbman are "Formal and Rhythmic Continuity," of which she writes that "[m]usic functions as spatiotemporal connective tissue" (p. 90), and "Unity," through which music "provides musical recapitulation and closure to reinforce narrative closure" (p. 90) – in a way, bridging the audience out of the story and back into the real world with a sense of satisfaction at the story’s close.
These aspects can be seen in Tenet (2020), where, during the end credits, we hear a Travis Scott song made for the film. As director Christopher Nolan recounts, after Travis Scott finished the song in postproduction, Nolan listened to it and decided to take the film out of picture lock so that composer Ludwig Göransson could intersperse his score with Scott’s vocalizations (Cortex, 2020). The intended effect was a subconscious familiarity throughout the film, culminating in the end-credits song (Cortex, 2020). Scott’s vocalizations can also be heard over the opening titles, illustrating Gorbman’s concept of opening and closing music that surrounds "the film in a musical envelope" and provides "narrative closure" (p. 90). Since the "thematic score provides a built-in unity of statement and variation as well as a semiotic subsystem" (Gorbman, 1987), placing Travis Scott’s vocalizations at different points in the movie makes them a semiotic signifier threading through the whole film, giving the viewer a sense of unity simply by listening to the film music from beginning to end.

Conclusion
In our contemporary age, music serves under Gorbman’s theoretical framework in a diverse set of ways, giving rise to a multitude of interpretations for the viewer to enjoy or sift through. In both commercial and independent films, contemporary filmmakers have continued to take up the challenge of blurring the line between diegetic and nondiegetic music in order to express their desired meanings, and of using music as a signifier of place and period through intertextuality. They have also muted dialogue to prioritize music, or vice versa – letting music fill the gaps of silence between characters in their diegetic world – and used leitmotifs to encourage empathy with difficult characters. Music has likewise been used to provide narrative closure in intelligent ways.
As technology and stories continue to grow more complex, the use of music as a narrative tool will likely break more barriers, finding creative ways to employ and subvert Gorbman’s framework and further enhancing the human experience of watching film.
Bibliography
Cornell, C. [Charles Cornell]. (2023, March 15). The incredibly incredible soundtrack of The Incredibles [Video]. YouTube. https://www.youtube.com/watch?v=5bRp6_iGTSY&t=115s
Cortex. (2020, August 6). Christopher Nolan on TENET – The full interview [Video]. YouTube. https://www.youtube.com/watch?v=_Woppb0k_2M
Engel, F., & Wildfeuer, J. (2015). Hearing music in dreams: Towards the semiotic role of music in Nolan’s Inception. In J. Furby & S. Joy (Eds.), The cinema of Christopher Nolan: Imagining the impossible. Columbia University Press.
Gorbman, C. (1987). Unheard melodies: Narrative film music. Indiana University Press.
Stilwell, R. J. (2014). The fantastical gap between diegetic and nondiegetic. In D. Neumeyer & J. Buhler (Eds.), The Oxford handbook of film music studies. Oxford University Press.