Some possibilities for data-driven analysis of multimodal narratives
- Metallurgy and Materials (GA03)
- Tuesday 26 January 2016 (16:30)
- English Language Research (ELR) seminar series
Speaker: Andrew Salway (Uni Research, Bergen)
Audio description is a spoken description of on-screen action that accompanies films and television programs to make them accessible to blind and visually impaired people. Before it is recorded, audio description is scripted following guidelines on its content and form. The specific purpose it serves, and the constraints on content and form that follow from it, make audio description interesting as a language for special purposes. It is also interesting as a kind of “collateral text”, cf. film scripts, subtitles and plot summaries. I will argue that collateral texts make the multimodal narrative functioning of films and television programs amenable to automated data-driven corpus analysis in ways not possible through automated analysis of the pixels and sound waves in video data.
The talk will present an analysis of a corpus of audio description, focusing on how unusually frequent linguistic forms reflect the purpose of audio description, i.e. to provide sufficient information about on-screen action for a blind or visually impaired person to follow the story. These linguistic forms – which relate to information about characters’ appearances, focus of attention, interactions and emotional states – will also be discussed with regard to how a data-driven analysis of audio description (and other collateral texts) may elucidate the functioning of multimodal narratives. As part of this discussion, I will present a new data-driven technique for inducing salient linguistic structures from corpora – local grammar induction – intended to complement current techniques such as keywords, n-grams and collocations.
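For readers unfamiliar with the standard techniques mentioned above, the following is a minimal sketch of two of them – n-gram counting and collocation scoring by pointwise mutual information (PMI) – applied to a tiny invented corpus. The toy sentences, variable names and frequency threshold are illustrative assumptions for demonstration only, not material from the talk; a real study would use a large collection of audio description scripts.

```python
# Sketch: frequent bigrams and PMI-scored collocations over a toy corpus.
from collections import Counter
import math

# Invented mini-corpus of audio-description-like clauses (for illustration).
corpus = (
    "she looks at him . he smiles at her . "
    "she turns away . he looks down . "
    "she looks at the door . he looks at her"
).split()

unigrams = Counter(corpus)                      # single-word frequencies
bigrams = Counter(zip(corpus, corpus[1:]))      # adjacent word-pair frequencies
n = len(corpus)

def pmi(w1, w2):
    """Pointwise mutual information: log2 p(w1,w2) / (p(w1) * p(w2))."""
    p_joint = bigrams[(w1, w2)] / (n - 1)
    p_w1 = unigrams[w1] / n
    p_w2 = unigrams[w2] / n
    return math.log2(p_joint / (p_w1 * p_w2))

# Rank bigrams that occur more than once by PMI (a common cutoff to
# avoid unstable scores for one-off pairs).
ranked = sorted(
    (bg for bg, c in bigrams.items() if c > 1),
    key=lambda bg: pmi(*bg),
    reverse=True,
)
for w1, w2 in ranked[:3]:
    print(w1, w2, round(pmi(w1, w2), 2))
```

Keyword analysis works on the same frequency counts, comparing each word's relative frequency in the target corpus against a reference corpus; local grammar induction, as described in the abstract, aims to go beyond such flat counts toward salient linguistic structures.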
Venue: Room GA03, Metallurgy and Materials Building