WICED 2022

Reims, France, 28 April 2022
Movie Style Annotation and Analysis
Using Advene to Bridge the Gap Between Users and Ontologies in Movie Annotation
Olivier Aubert
Evaluation of Deep Pose Detectors for Automatic Analysis of Film Style
Hui-Yin Wu, Luan Nguyen, Yoldoz Tabei, and Lucile Sassatelli
The Prose Storyboard Language: A Tool for Annotating and Directing Movies
Rémi Ronfard, Vineet Gandhi, Laurent Boiron, and Vaishnavi Ameya Murukutla
Intelligent and Virtual Cinematography
Framework to Computationally Analyze Kathakali Videos
Pratikkumar Bulani, Jayachandran S, Sarath Sivaprasad, and Vineet Gandhi
Consistent Multi- and Single-View HDR-Image Reconstruction from Single Exposures
Aditya Mohan, Jing Zhang, Remi Cozot, and Celine Loscos
(Re-)Framing Virtual Reality
Rémi Sagot-Duvauroux, François Garnier, and Rémi Ronfard
Film Editing and Directing
Real-Time Music-Driven Movie Design Framework
Sarah Hofmann, Maximilian Seeger, Henning Rogge-Pott, and Sebastian von Mammen

BibTeX (WICED 2022)
@inproceedings{10.2312:wiced.20222004,
  booktitle = {Workshop on Intelligent Cinematography and Editing},
  editor = {Ronfard, Rémi and Wu, Hui-Yin},
  title = {{WICED 2022: Frontmatter}},
  author = {Ronfard, Rémi and Wu, Hui-Yin},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {2411-9733},
  ISBN = {978-3-03868-173-1},
  DOI = {10.2312/wiced.20222004}
}
@inproceedings{10.2312:wiced.20221047,
  booktitle = {Workshop on Intelligent Cinematography and Editing},
  editor = {Ronfard, Rémi and Wu, Hui-Yin},
  title = {{Evaluation of Deep Pose Detectors for Automatic Analysis of Film Style}},
  author = {Wu, Hui-Yin and Nguyen, Luan and Tabei, Yoldoz and Sassatelli, Lucile},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {2411-9733},
  ISBN = {978-3-03868-173-1},
  DOI = {10.2312/wiced.20221047}
}
@inproceedings{10.2312:wiced.20221048,
  booktitle = {Workshop on Intelligent Cinematography and Editing},
  editor = {Ronfard, Rémi and Wu, Hui-Yin},
  title = {{The Prose Storyboard Language: A Tool for Annotating and Directing Movies}},
  author = {Ronfard, Rémi and Gandhi, Vineet and Boiron, Laurent and Murukutla, Vaishnavi Ameya},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {2411-9733},
  ISBN = {978-3-03868-173-1},
  DOI = {10.2312/wiced.20221048}
}
@inproceedings{10.2312:wiced.20221046,
  booktitle = {Workshop on Intelligent Cinematography and Editing},
  editor = {Ronfard, Rémi and Wu, Hui-Yin},
  title = {{Using Advene to Bridge the Gap Between Users and Ontologies in Movie Annotation}},
  author = {Aubert, Olivier},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {2411-9733},
  ISBN = {978-3-03868-173-1},
  DOI = {10.2312/wiced.20221046}
}
@inproceedings{10.2312:wiced.20221049,
  booktitle = {Workshop on Intelligent Cinematography and Editing},
  editor = {Ronfard, Rémi and Wu, Hui-Yin},
  title = {{Framework to Computationally Analyze Kathakali Videos}},
  author = {Bulani, Pratikkumar and S, Jayachandran and Sivaprasad, Sarath and Gandhi, Vineet},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {2411-9733},
  ISBN = {978-3-03868-173-1},
  DOI = {10.2312/wiced.20221049}
}
@inproceedings{10.2312:wiced.20221050,
  booktitle = {Workshop on Intelligent Cinematography and Editing},
  editor = {Ronfard, Rémi and Wu, Hui-Yin},
  title = {{Consistent Multi- and Single-View HDR-Image Reconstruction from Single Exposures}},
  author = {Mohan, Aditya and Zhang, Jing and Cozot, Remi and Loscos, Celine},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {2411-9733},
  ISBN = {978-3-03868-173-1},
  DOI = {10.2312/wiced.20221050}
}
@inproceedings{10.2312:wiced.20221051,
  booktitle = {Workshop on Intelligent Cinematography and Editing},
  editor = {Ronfard, Rémi and Wu, Hui-Yin},
  title = {{(Re-)Framing Virtual Reality}},
  author = {Sagot-Duvauroux, Rémi and Garnier, François and Ronfard, Rémi},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {2411-9733},
  ISBN = {978-3-03868-173-1},
  DOI = {10.2312/wiced.20221051}
}
@inproceedings{10.2312:wiced.20221052,
  booktitle = {Workshop on Intelligent Cinematography and Editing},
  editor = {Ronfard, Rémi and Wu, Hui-Yin},
  title = {{Real-Time Music-Driven Movie Design Framework}},
  author = {Hofmann, Sarah and Seeger, Maximilian and Rogge-Pott, Henning and Mammen, Sebastian von},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {2411-9733},
  ISBN = {978-3-03868-173-1},
  DOI = {10.2312/wiced.20221052}
}
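The entries above follow a regular layout, so they can be mined with a short Python sketch using only the standard library. The `parse_entries` helper below is hypothetical and tailored to this file's formatting (double-braced titles, single-braced DOIs); it is not a general BibTeX parser.

```python
import re

def parse_entries(bibtex: str):
    """Extract the citation key, title, and DOI from each @inproceedings record."""
    entries = []
    for block in re.split(r'@inproceedings\{', bibtex)[1:]:
        # The citation key is everything up to the first comma.
        key = block.split(',', 1)[0].strip()
        # Titles use double braces, which also keeps 'booktitle' from matching.
        title = re.search(r'title\s*=\s*\{\{(.+?)\}\}', block, re.S)
        doi = re.search(r'DOI\s*=\s*\{(.+?)\}', block, re.S)
        entries.append({
            'key': key,
            'title': title.group(1).strip() if title else None,
            'doi': doi.group(1).strip() if doi else None,
        })
    return entries

sample = """@inproceedings{10.2312:wiced.20221048,
  title = {{The Prose Storyboard Language: A Tool for Annotating and Directing Movies}},
  DOI = {10.2312/wiced.20221048}
}"""
records = parse_entries(sample)
print(records[0]['doi'])  # -> 10.2312/wiced.20221048
```

Run against the full block above, this yields one record per paper, which is enough to build resolvable links of the form `https://doi.org/<doi>`.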

Recent Submissions

  • Item
    WICED 2022: Frontmatter
    (The Eurographics Association, 2022) Ronfard, Rémi; Wu, Hui-Yin; Ronfard, Rémi; Wu, Hui-Yin
  • Item
    Evaluation of Deep Pose Detectors for Automatic Analysis of Film Style
    (The Eurographics Association, 2022) Wu, Hui-Yin; Nguyen, Luan; Tabei, Yoldoz; Sassatelli, Lucile; Ronfard, Rémi; Wu, Hui-Yin
    Identifying human characters and how they are portrayed on-screen is inherently linked to how we perceive and interpret the story and artistic value of visual media. Building computational models sensitive to story will thus require a formal representation of the character. Yet this kind of data is complex and tedious to annotate on a large scale. Human pose estimation (HPE) can facilitate this task by identifying features such as position, size, and movement that can be transformed into input for machine learning models, enabling higher-level artistic and storytelling interpretation. However, current HPE methods operate mainly on non-professional image content, with no comprehensive evaluation of their performance on artistic film. Our goal in this paper is thus to evaluate the performance of HPE methods on artistic film content. We first propose a formal representation of the character based on cinematography theory, then sample and annotate 2700 images from three datasets with this representation, one of which we introduce to the community. An in-depth analysis is then conducted to measure the general performance of two recent HPE methods on metrics of precision and recall for character detection, and to examine the impact of cinematographic style. From these findings, we highlight the advantages of HPE for automated film analysis and propose future directions to improve their performance on artistic film content.
  • Item
    The Prose Storyboard Language: A Tool for Annotating and Directing Movies
    (The Eurographics Association, 2022) Ronfard, Rémi; Gandhi, Vineet; Boiron, Laurent; Murukutla, Vaishnavi Ameya; Ronfard, Rémi; Wu, Hui-Yin
    The prose storyboard language is a formal language for describing movies shot by shot, where each shot is described with a unique sentence. The language uses a simple syntax and limited vocabulary borrowed from working practices in traditional movie-making and is intended to be readable both by machines and humans. The language has been designed over the last ten years to serve as a high-level user interface for intelligent cinematography and editing systems. In this new paper, we present the latest evolution of the language, and the results of an extensive annotation exercise showing the benefits of the language in the task of annotating the sophisticated cinematography and film editing of classic movies.
  • Item
    Using Advene to Bridge the Gap Between Users and Ontologies in Movie Annotation
    (The Eurographics Association, 2022) Aubert, Olivier; Ronfard, Rémi; Wu, Hui-Yin
    The analysis of feature films and documentaries has always relied on the access tools available: movie theaters required memorizing whole sequences, while home video (VHS, DVD) brought new possibilities for analysis. Digital video tools now provide additional capabilities, such as video annotation, which is sometimes used in research contexts, from simple synchronized note-taking to more structured approaches. The AdA project of the Cinepoietics team at Freie Universität Berlin investigates the audiovisual rhetorics of affect in audiovisual media on the financial crisis. The analyses are framed by theoretical assumptions about the process of film viewing, and one of the goals of the project is to study to what extent a systematic approach based on semantic annotations of the audiovisual corpus can shed new light on these reflections. Such an approach requires appropriate tooling for humanities researchers. In this contribution, we describe how the Advene video annotation platform has been extended and used to produce and use semantic annotations and to validate the underlying ontology, accompanying the practices of humanities researchers in the AdA project.
  • Item
    Framework to Computationally Analyze Kathakali Videos
    (The Eurographics Association, 2022) Bulani, Pratikkumar; S, Jayachandran; Sivaprasad, Sarath; Gandhi, Vineet; Ronfard, Rémi; Wu, Hui-Yin
    Kathakali is one of the major forms of classical Indian dance. The dance form is distinguished by its elaborately colourful makeup, costumes and face masks. In this work, we present (a) a framework to analyze the facial expressions of the actors and (b) novel visualization techniques for the same. Due to the extensive makeup, costumes and masks, general face analysis techniques fail on Kathakali videos. We present a dataset with manually annotated Kathakali sequences for four downstream tasks, i.e. face detection, background subtraction, landmark detection and face segmentation. We rely on transfer learning to fine-tune deep learning models, and present qualitative and quantitative results for these tasks. Finally, we present a novel application of style transfer of Kathakali video onto a cartoonized face. The comprehensive framework presented in the paper paves the way for better understanding, analysis, pedagogy and visualization of Kathakali videos.
  • Item
    Consistent Multi- and Single-View HDR-Image Reconstruction from Single Exposures
    (The Eurographics Association, 2022) Mohan, Aditya; Zhang, Jing; Cozot, Remi; Loscos, Celine; Ronfard, Rémi; Wu, Hui-Yin
    Recently, there have been attempts to obtain high-dynamic-range (HDR) images from single exposures, and efforts to reconstruct multi-view HDR images using multiple input exposures. However, to the best of our knowledge, there have not been any attempts to reconstruct multi-view HDR images from multi-view single exposures. We present a two-step methodology to obtain color-consistent multi-view HDR reconstructions from single-exposure multi-view low-dynamic-range (LDR) images. We define a new combination of the Mean Absolute Error and Multi-Scale Structural Similarity Index loss functions to train a network to reconstruct an HDR image from an LDR one. Once trained, we apply this network to multi-view input. When tested on single images, the outputs achieve results competitive with the state of the art. Quantitative and qualitative metrics applied to our results and to the state of the art show that our HDR expansion is better than others while maintaining similar qualitative reconstruction results. We also demonstrate that applying this network on multi-view images ensures coherence throughout the generated grid of HDR images.
  • Item
    (Re-)Framing Virtual Reality
    (The Eurographics Association, 2022) Sagot-Duvauroux, Rémi; Garnier, François; Ronfard, Rémi; Ronfard, Rémi; Wu, Hui-Yin
    We address the problem of translating the rich vocabulary of cinematographic shots elaborated in classic films for use in virtual reality. Using a classic scene from Alfred Hitchcock's "North by Northwest", we describe a series of artistic experiments attempting to enter "inside the movie" in various conditions and report on the challenges facing the film director in this task. For the case of room-scale VR, we suggest that the absence of the visual frame of the screen can be usefully replaced by the spatial frame of the physical room where the experience takes place. This "re-framing" opens new directions for creative film directing in virtual reality.
  • Item
    Real-Time Music-Driven Movie Design Framework
    (The Eurographics Association, 2022) Hofmann, Sarah; Seeger, Maximilian; Rogge-Pott, Henning; Mammen, Sebastian von; Ronfard, Rémi; Wu, Hui-Yin
    Cutting to music is a widely used stylistic device in filmmaking. The usual process involves an editor manually adjusting the movie's sequences contingent upon the beat or other musical features. But with today's movie productions starting to leverage real-time systems, this manual effort can be reduced: automatic cameras can make decisions on their own according to pre-defined rules, even in real time. In this paper, we present an approach to automatically create a music video. We have realised its implementation as a coding framework integrating with the FMOD API and Unreal Engine 4. The framework provides the means to analyze a music stream at runtime and to translate the extracted features into an animation story line, supported by cinematic cutting. We demonstrate its workings by means of an artistic, music-driven movie.