Technological advances have always influenced the development of new audio-visual aesthetics. Recently, exploiting the spatial capabilities of immersive sound technology in the form of Dolby Atmos, Alfonso Cuarón introduced in Gravity (2013) an innovative sound design approach that enhances the illusion of 'presence' in the space of the diegesis by consistently maintaining a coherent, realistic, and immersive representation of a given point-of-audition. This sonic strategy, which we have termed the immersive point-of-audition, provides a three-dimensional representation of the filmic space, localising sound effects, music, and dialogue in accordance with the position of their sources within the diegesis. In this paper, we introduce the definition and main characteristics of this emergent sound design approach and, using Gravity as an illustrative example, argue that it has the potential to facilitate the processes of transportation and identification in cinema.
Nowadays, audio description is used to enable visually impaired people to access films. It has an important limitation, however: visually impaired audiences must rely on a describer and cannot access the work directly. The aim of this project was to design a format of sonic art called the audio film, which eliminates the need for visual elements and for a describer by providing information solely through sound, sound processing, and spatialisation, and which might be considered an alternative to audio description. This project is also of interest to the domains of auditory displays and sonic interaction design, as solutions need to be found for effectively portraying storytelling information and characters' actions through sound rather than narration. To explore the viability of this format, an example was designed based on Roald Dahl's Lamb to the Slaughter (1954) using a 6.1 surround sound configuration. Through executing this design, we found that the format can successfully convey a story without the need for either visual elements or a narrator.
Sonic interaction design studies how digital sound can be used in interactive contexts to convey information, meaning, and aesthetic and emotional qualities. This area of research is positioned at the intersection of sound and music computing, auditory displays, and interaction design. The key issue the designer is asked to tackle is to create meaningful sound for objects and interactions that are often new. To date, there are no set design methodologies, but a variety of approaches available to the designer. Knowledge and understanding of how humans listen to and interpret sound is the first step toward being able to create such sounds. This article discusses two original approaches that borrow techniques from film sound and theatre. Cinematic sound highlights how our interpretation of sound depends on listening modes and context, while theatre settings allow us to explore sonic interactions from the different perspectives of the interacting subject, the observer, and the designer.
The voice is the most important sound in a film soundtrack: it represents a character and it carries language. There are different types of cinematic voices: dialogue, internal monologues, and voice-overs. Conventionally, two main characteristics differentiate these voices: lip synchronisation and the vocal attributes that make a voice appropriate for the character (for example, a voice that sounds very close to the audience can be appropriate for a narrator, but not for an onscreen character). What happens, then, when a film character can only speak through an asynchronous machine that produces a 'robot-like' voice? This article discusses the sound-related work and experimentation carried out by the author for the short film Voice by Choice. It also asks whether speech technology design can learn from its cinematic representation, and whether such uncommon film protagonists can contribute creatively to transforming the conventions of cinematic voices.