WICED 2017

Lyon, France | April 2017

Reasoning and Knowledge
Declarative Spatial Reasoning for Intelligent Cinematography
Mehul Bhatt, Carl Schultz, Jakob Suchan, and Przemyslaw Walega
La Caméra Enchantée
Jarek Rossignac
Implementing Hitchcock - the Role of Focalization and Viewpoint
Quentin Galvane and Rémi Ronfard
CaMor: Screw Interpolation between Perspective Projections of Partial Views of Rectangular Images
Gokul Raghuraman, Nicholas Barrash, and Jarek Rossignac
Styles and Challenges
Inferring the Structure of Action Movies
Danila Potapov, Matthijs Douze, Jérôme Revaud, Zaid Harchaoui, and Cordelia Schmid
Analyzing Elements of Style in Annotated Film Clips
Hui-Yin Wu, Quentin Galvane, Christophe Lino, and Marc Christie
Five Challenges for Intelligent Cinematography and Editing
Rémi Ronfard
Live-action Cinematography
Zooming On All Actors: Automatic Focus+Context Split Screen Video Generation
Moneish Kumar, Vineet Gandhi, Rémi Ronfard, and Michael Gleicher
Pano2Vid: Automatic Cinematography for Watching 360° Videos
Yu-Chuan Su, Dinesh Jayaraman, and Kristen Grauman
A Probabilistic Logic Programming Approach to Automatic Video Montage
Bram Aerts, Toon Goedemé, and Joost Vennekens
Automatic Camera Selection and PTZ Canvas Steering for Autonomous Filming of Reality TV
Timothy Callemein, Wiebe Van Ranst, and Toon Goedemé
Human-Machine Collaborations in Film
Making Movies from Make-Believe Games
Adela Barbulescu, Maxime Garcia, Dominique Vaufreydaz, Marie Paule Cani, and Rémi Ronfard
Design of an Intelligent Navigation System for Participative Computer Animation
Iou-Shiuan Liu, Tsai-Yen Li, and Marc Christie
Using ECPs for Interactive Applications in Virtual Cinematography
Hui-Yin Wu, Tsai-Yen Li, and Marc Christie
Film Ties: A Web-based Virtual 3D Lab for Teaching the Film Art from Script to Blocking
William Bares, Caroline Requierme, and Elizabeth Obisesan

BibTeX (WICED 2017)
@inproceedings{10.2312:wiced.20171063,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{Declarative Spatial Reasoning for Intelligent Cinematography}},
  author    = {Bhatt, Mehul and Schultz, Carl and Suchan, Jakob and Walega, Przemyslaw},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171063}
}

@inproceedings{10.2312:wiced.20171064,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{La Caméra Enchantée}},
  author    = {Rossignac, Jarek},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171064}
}

@inproceedings{10.2312:wiced.20171067,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{Inferring the Structure of Action Movies}},
  author    = {Potapov, Danila and Douze, Matthijs and Revaud, Jérôme and Harchaoui, Zaid and Schmid, Cordelia},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171067}
}

@inproceedings{10.2312:wiced.20171066,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{CaMor: Screw Interpolation between Perspective Projections of Partial Views of Rectangular Images}},
  author    = {Raghuraman, Gokul and Barrash, Nicholas and Rossignac, Jarek},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171066}
}

@inproceedings{10.2312:wiced.20171065,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{Implementing Hitchcock - the Role of Focalization and Viewpoint}},
  author    = {Galvane, Quentin and Ronfard, Rémi},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171065}
}

@inproceedings{10.2312:wiced.20171068,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{Analyzing Elements of Style in Annotated Film Clips}},
  author    = {Wu, Hui-Yin and Galvane, Quentin and Lino, Christophe and Christie, Marc},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171068}
}

@inproceedings{10.2312:wiced.20171070,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{Zooming On All Actors: Automatic Focus+Context Split Screen Video Generation}},
  author    = {Kumar, Moneish and Gandhi, Vineet and Ronfard, Rémi and Gleicher, Michael},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171070}
}

@inproceedings{10.2312:wiced.20171069,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{Five Challenges for Intelligent Cinematography and Editing}},
  author    = {Ronfard, Rémi},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171069}
}

@inproceedings{10.2312:wiced.20171073,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{Automatic Camera Selection and PTZ Canvas Steering for Autonomous Filming of Reality TV}},
  author    = {Callemein, Timothy and Van Ranst, Wiebe and Goedemé, Toon},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171073}
}

@inproceedings{10.2312:wiced.20171071,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{Pano2Vid: Automatic Cinematography for Watching 360° Videos}},
  author    = {Su, Yu-Chuan and Jayaraman, Dinesh and Grauman, Kristen},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171071}
}

@inproceedings{10.2312:wiced.20171072,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{A Probabilistic Logic Programming Approach to Automatic Video Montage}},
  author    = {Aerts, Bram and Goedemé, Toon and Vennekens, Joost},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171072}
}

@inproceedings{10.2312:wiced.20171074,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{Making Movies from Make-Believe Games}},
  author    = {Barbulescu, Adela and Garcia, Maxime and Vaufreydaz, Dominique and Cani, Marie Paule and Ronfard, Rémi},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171074}
}

@inproceedings{10.2312:wiced.20171075,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{Design of an Intelligent Navigation System for Participative Computer Animation}},
  author    = {Liu, Iou-Shiuan and Li, Tsai-Yen and Christie, Marc},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171075}
}

@inproceedings{10.2312:wiced.20171077,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{Film Ties: A Web-based Virtual 3D Lab for Teaching the Film Art from Script to Blocking}},
  author    = {Bares, William and Requierme, Caroline and Obisesan, Elizabeth},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171077}
}

@inproceedings{10.2312:wiced.20171076,
  booktitle = {Eurographics Workshop on Intelligent Cinematography and Editing},
  editor    = {William Bares and Vineet Gandhi and Quentin Galvane and Remi Ronfard},
  title     = {{Using ECPs for Interactive Applications in Virtual Cinematography}},
  author    = {Wu, Hui-Yin and Li, Tsai-Yen and Christie, Marc},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {2411-9733},
  ISBN      = {978-3-03868-031-4},
  DOI       = {10.2312/wiced.20171076}
}

Recent Submissions

  • Declarative Spatial Reasoning for Intelligent Cinematography
    (The Eurographics Association, 2017) Bhatt, Mehul; Schultz, Carl; Suchan, Jakob; Walega, Przemyslaw
    We present computational visuo-spatial representation and reasoning from the viewpoint of the research areas of artificial intelligence, spatial cognition and computation, and human-computer interaction. The particular focus is on demonstrating recent advances in the theory and practice of spatial reasoning, and its significance and potential as a foundational AI method for (intelligent) computational cinematography & editing systems.
  • WICED 2017: Frontmatter
    (The Eurographics Association, 2017) Bares, William; Gandhi, Vineet; Galvane, Quentin; Ronfard, Rémi
  • La Caméra Enchantée
    (The Eurographics Association, 2017) Rossignac, Jarek
    A rich set of tools has been developed for designing and animating camera motions. Most of them optimize some geometric measure while satisfying a set of geometric constraints. Others strive to provide an intuitive graphical user interface for manipulating the camera motion or the key poses that control it. We will start by reviewing examples of such tools developed by the speaker and his collaborators and students. These include a 6 DoF GUI for moving a MiniCam over a floor plan of the set, arguing the benefits of Screw Motions for interpolating key poses, using HelBender to smooth piecewise helical interpolating motions, controlling the camera by moving on screen the location of feature points tracked by the camera, and scene-graph extensions that support smooth transitions between tracked objects. Then, we will ask harder questions: What is the best way for the user to specify the objectives, the constraints, and the camera motion style? How do we define and program such a style? Is the objective to make the motion so natural that it is not noticed by the viewer, or should we strive to support aesthetic qualities and artistic camera actions? And finally, how do we define and program responsive camera behaviors for interactive environments? Author's prior publications referenced in the talk include: [SBM 95], [RK01], [KR03], [PR05], [RKS 07], [PR08], [RS08], [RV11], [RK12], [RLV12].
  • Inferring the Structure of Action Movies
    (The Eurographics Association, 2017) Potapov, Danila; Douze, Matthijs; Revaud, Jérôme; Harchaoui, Zaid; Schmid, Cordelia
    While important advances were recently made towards temporally localizing and recognizing specific human actions or activities in videos, efficient detection and classification of long video chunks belonging to semantically-defined categories remains challenging. Examples of such categories can be found in action movies, whose storylines often follow a standardized structure corresponding to a sequence of typical segments such as ''pursuit'', ''romance'', etc. We introduce a new dataset, Action Movie Franchises, consisting of a collection of Hollywood action movie franchises. We define 11 non-exclusive semantic categories that are broad enough to cover most of the movie footage. The corresponding events are annotated as groups of video shots, possibly overlapping. We propose an approach for localizing events based on classifying shots into categories and learning the temporal constraints between shots. We show that temporal constraints significantly improve the classification performance. We set up an evaluation protocol for event localization as well as for shot classification, depending on whether movies from the same franchise are present or not in the training data.
  • CaMor: Screw Interpolation between Perspective Projections of Partial Views of Rectangular Images
    (The Eurographics Association, 2017) Raghuraman, Gokul; Barrash, Nicholas; Rossignac, Jarek
    CaMor is a tool for generating an animation from a single drawing or photograph that represents a partial view of a perspective projection of a planar shape or image that contains portions of only 3 edges of an unknown rectangle. The user identifies these portions and indicates where the corresponding lines should be at the end of the animation. CaMor produces a non-affine animation of the entire plane by combining (1) a new rectification procedure that identifies the orientation in 3D of a rectangle from the partial image of its perspective projection, (2) a depth adjustment that ensures that the two rectified rectangles are congruent in 3D, (3) a screw motion that interpolates in 3D between the two congruent shapes, and (4) at each frame, a perspective projection of a user-selected portion of the original image. The animation may be modified interactively by adjusting the final positions of the lines or the focal length. We suggest applications to the animation of hand-drawn scenes, to the morph between two photographs, and to the intuitive design of camera motions for indoor and street scenes.
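    A screw motion between two congruent 3D poses can be pictured as a constant-twist path obtained from the matrix logarithm of the relative rigid transform. The following minimal Python sketch illustrates only this interpolation step, with made-up example poses; it is not CaMor's implementation and omits the rectification, depth-adjustment, and projection stages:

      import numpy as np
      from scipy.linalg import expm, logm

      def screw_interpolate(A, B, s):
          """Pose at parameter s in [0, 1] along the screw motion from pose A to B.

          A and B are 4x4 homogeneous rigid-body transforms; the path has a
          constant twist, i.e. simultaneous rotation about and translation
          along a single fixed axis."""
          twist = logm(np.linalg.inv(A) @ B)    # se(3) element encoding the screw
          return np.real(A @ expm(s * twist))   # slide a fraction s along the screw

      # Example: a quarter turn about z combined with a translation along z.
      A = np.eye(4)
      B = np.array([[0.0, -1.0, 0.0, 0.0],
                    [1.0,  0.0, 0.0, 0.0],
                    [0.0,  0.0, 1.0, 2.0],
                    [0.0,  0.0, 0.0, 1.0]])
      mid_pose = screw_interpolate(A, B, 0.5)   # 45 degrees about z, half the lift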
  • Implementing Hitchcock - the Role of Focalization and Viewpoint
    (The Eurographics Association, 2017) Galvane, Quentin; Ronfard, Rémi
    Focalization and viewpoint are important aspects of narrative movie-making that need to be taken into account by cinematography and editing. In this paper, we argue that viewpoint can be determined from the first principles of focalization in the screenplay and adherence to a slightly modified version of Hitchcock's rule in cinematography and editing. With minor changes to previous work in automatic cinematography and editing, we show that this strategy makes it possible to easily control the viewpoint in the movie by rewriting and annotating the screenplay. We illustrate our claim with four versions of a moderately complex movie scene obtained by focalizing on its four main characters, with dramatically different camera choices.
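    Hitchcock's rule, in its textbook form, states that the size of a character on screen should be proportional to that character's importance in the story at that moment. A toy sketch of the mapping, with a hypothetical shot-size table and invented importance values (the paper uses a slightly modified version of the rule):

      SHOT_SIZES = [          # (screen-height fraction for the character, shot name)
          (0.25, "long shot"),
          (0.50, "medium shot"),
          (0.80, "close-up"),
          (1.00, "extreme close-up"),
      ]

      def shot_for(importance, all_importances):
          """Pick the shot whose size best matches a character's importance share."""
          share = importance / sum(all_importances)   # normalized importance in [0, 1]
          return min(SHOT_SIZES, key=lambda s: abs(s[0] - share))[1]

      # Focalizing on a different character reweights importance, and therefore
      # the chosen framings change across the four versions of the scene.
      print(shot_for(0.9, [0.9, 0.3, 0.1]))   # dominant character -> "close-up"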
  • Analyzing Elements of Style in Annotated Film Clips
    (The Eurographics Association, 2017) Wu, Hui-Yin; Galvane, Quentin; Lino, Christophe; Christie, Marc
    This paper presents an open database of annotated film clips together with an analysis of elements of film style related to how the shots are composed, how the transitions are performed between shots and how the shots are sequenced to compose a film unit. The purpose is to initiate a shared repository pertaining to elements of film style which can be used by computer scientists and film analysts alike. Though both research communities rely strongly on the availability of such information to foster their findings, current databases are either limited to low-level features (such as shot lengths, color and luminance information), contain noisy data, or are not available to the communities. The data and analysis we provide open exciting perspectives as to how computational approaches can rely more thoroughly on information and knowledge extracted from existing movies, and also provide a better understanding of how elements of style are arranged to construct a consistent message.
  • Zooming On All Actors: Automatic Focus+Context Split Screen Video Generation
    (The Eurographics Association, 2017) Kumar, Moneish; Gandhi, Vineet; Ronfard, Rémi; Gleicher, Michael
    Stage performances can be easily captured using a high resolution camera but these are often difficult to watch because actor faces are too small. We present a novel approach to create a split screen video that incorporates both the context as well as the close-up details of the actors. Our system takes as input the static recording of a stage performance and tracking information about the actor positions, and generates a video with a wide master shot and a set of close-ups of all identified actors, hence providing a focus+context view that shows both the overall action and the details of actor faces. The key to our approach is to compute these camera motions such that they are cinematically valid close-ups and to ensure that the set of views of the different actors is properly coordinated and presented. The close-up views are created as virtual camera movements by applying panning, cropping and zooming to the source video. We pose the computation of camera motions as convex optimization that creates detailed views and smooth movements, subject to cinematic constraints such as not cutting faces with the edge of the frame. Additional constraints allow for interaction amongst the close-up views of each actor, causing them to merge seamlessly when actors are close. Generated views are then placed in a layout that preserves the spatial relationships between actors. We demonstrate our results on a variety of video sequences from theatre and dance performances.
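    Posing a virtual camera move as convex optimization can be pictured with a small CVXPY program: a 1D crop-centre trajectory that tracks an actor while penalizing acceleration, constrained to keep the crop window inside the frame. This is a toy with assumed numbers and a single actor, not the paper's actual objective (which also coordinates multiple close-ups, merges views, and enforces face-cut constraints):

      import numpy as np
      import cvxpy as cp

      W, crop_w, T = 1920, 480, 200                        # frame width, crop width, frames
      actor_x = 900 + 300 * np.sin(np.linspace(0, 3, T))   # tracked actor centre (toy data)

      x = cp.Variable(T)                                   # crop-centre trajectory
      objective = cp.Minimize(
          cp.sum_squares(x - actor_x)                      # stay on the actor
          + 100 * cp.sum_squares(cp.diff(x, 2))            # penalize acceleration -> smooth pan
      )
      constraints = [x >= crop_w / 2, x <= W - crop_w / 2]  # crop stays inside the frame
      cp.Problem(objective, constraints).solve()

      smooth_track = x.value                               # per-frame crop centres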
  • Five Challenges for Intelligent Cinematography and Editing
    (The Eurographics Association, 2017) Ronfard, Rémi
    In this position paper, we propose five challenges for advancing the state of the art in intelligent cinematography and editing by taking advantage of the huge quantity of cinematographic data (movies) and metadata (movie scripts) available in digital formats. This suggests a data-driven approach to intelligent cinematography and editing, with at least five scientific bottlenecks that need to be carefully analyzed and resolved. We briefly describe them and suggest some possible avenues for future research in each of those new directions.
  • Automatic Camera Selection and PTZ Canvas Steering for Autonomous Filming of Reality TV
    (The Eurographics Association, 2017) Callemein, Timothy; Van Ranst, Wiebe; Goedemé, Toon
    Reality TV shows that follow people in their day-to-day lives are not a new concept. However, the traditional methods used in the industry require a lot of manual labor and need the presence of at least one physical cameraman. Because of this, the subjects tend to behave differently when they are aware of being recorded. This paper presents an approach to follow people in their day-to-day lives, for long periods of time (months to years), while being as unobtrusive as possible. To do this, we use unmanned cinematographically-aware cameras hidden in people's houses. Our contribution in this paper is twofold: First, we create a system to limit the amount of recorded data by intelligently controlling a video switch matrix, in combination with a multi-channel recorder. Second, we create a virtual cameraman by controlling a PTZ camera to automatically make cinematographically pleasing shots. Throughout this paper, we worked closely with a real camera crew, enabling us to compare the results of our system to the work of trained professionals. This work was originally published in MVA 2017, as T. Callemein, W. Van Ranst and T. Goedemé, "The Autonomous hidden Camera Crew".
  • Pano2Vid: Automatic Cinematography for Watching 360° Videos
    (The Eurographics Association, 2017) Su, Yu-Chuan; Jayaraman, Dinesh; Grauman, Kristen
    We introduce the novel task of Pano2Vid --- automatic cinematography in panoramic 360° videos. Given a 360° video, the goal is to direct an imaginary camera to virtually capture natural-looking normal field-of-view (NFOV) video. By selecting "where to look" within the panorama at each time step, Pano2Vid aims to free both the videographer and the end viewer from the task of determining what to watch. Towards this goal, we first compile a dataset of 360° videos downloaded from the web, together with human-edited NFOV camera trajectories to facilitate evaluation. Next, we propose AutoCam, a data-driven approach to solve the Pano2Vid task. AutoCam leverages NFOV web video to discriminatively identify space-time "glimpses" of interest at each time instant, and then uses dynamic programming to select optimal human-like camera trajectories. Through experimental evaluation on multiple newly defined Pano2Vid performance measures against several baselines, we show that our method successfully produces informative videos that could conceivably have been captured by human videographers.
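    The trajectory-selection step can be pictured as a Viterbi-style dynamic program: at each time step, choose one of K candidate viewing directions, trading per-glimpse interest scores against a transition cost on camera motion. A generic sketch with assumed inputs (the paper's capture-worthiness scores are learned from NFOV web video, and its cost model differs):

      import numpy as np

      def best_trajectory(scores, positions, smooth=1.0):
          """Max-score camera trajectory. scores[t, k] is the interest of candidate
          direction k at time t; transitions pay a cost proportional to the angular
          jump between directions (wraparound at 360 degrees ignored for brevity)."""
          T, K = scores.shape
          jump = smooth * np.abs(positions[None, :] - positions[:, None])  # K x K cost
          best = scores[0].copy()                  # best cumulative score ending at k
          back = np.zeros((T, K), dtype=int)       # backpointers
          for t in range(1, T):
              cand = best[:, None] - jump          # come from j, pay transition cost
              back[t] = np.argmax(cand, axis=0)    # best predecessor for each k
              best = cand[back[t], np.arange(K)] + scores[t]
          path = [int(np.argmax(best))]
          for t in range(T - 1, 0, -1):            # trace the optimal path backwards
              path.append(int(back[t][path[-1]]))
          return path[::-1]

      # Toy usage: 5 time steps, 4 candidate pan angles (degrees).
      rng = np.random.default_rng(0)
      traj = best_trajectory(rng.random((5, 4)), np.array([0.0, 90.0, 180.0, 270.0]))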
  • A Probabilistic Logic Programming Approach to Automatic Video Montage
    (The Eurographics Association, 2017) Aerts, Bram; Goedemé, Toon; Vennekens, Joost
    Hiring a professional camera crew to cover an event such as a lecture, sports game or musical performance may be prohibitively expensive. The CAMETRON project aims at drastically reducing this cost by developing an (almost) fully automated system that can produce video recordings of such events with a quality similar to that of a professional crew. This system consists of different components, including intelligent Pan-Tilt-Zoom cameras and UAVs that act as ''virtual cameramen''. To combine the footage of these different cameras into a single coherent and pleasant-to-watch video, a ''virtual editor'' is needed. Human editors typically follow a number of different, and sometimes contradictory, cinematographic ''rules'' to accomplish this task. To develop our virtual editor, we follow a declarative approach, in which we explicitly represent these rules. This approach has the benefit that it offers a great deal of flexibility in deciding which rules should be taken into account and how they should take priority over each other. It also allows us to reuse the same knowledge to perform different tasks: we can not only use the rules to generate a montage, but also to evaluate the quality of a given montage or to learn certain properties of good montages from given examples. To represent the rules, we need a suitable knowledge representation language. A particular challenge is that cinematographic rules are not strict: they are guidelines that are typically followed, but not always. Indeed, the rules may sometimes contradict each other, and even if they do not, a human editor may still choose to ignore a rule, simply because the result ''feels'' better. A virtual editor should therefore not rigidly follow the rules, but should sometimes deviate from them in order to give the montage a more interesting and natural flavour, thereby mimicking the creativity of a human editor. For this reason, we have chosen to use the Probabilistic Logic Programming language CP-logic and its implementation in the Problog system, which allows us to represent these rules in a non-deterministic way. This has the additional benefit that, just like a human editor, the system is able to produce different montages from the same input streams. The proposed editing system takes as input a number of different video streams, together with a computer vision analysis of each of these streams. For each frame in each stream, we expect this analysis to provide information such as the presence of people, the type of shot and the action the main subject performs. The goal of our editing system is to decide for each point in time which of the available camera feeds will be used. Those decisions are made based on the cinematographic model described in the paper. The output of the system is the single video stream that is thus constructed. We demonstrate that the resulting system is able to produce real-time edits of different video streams. To verify the quality of the resulting montage, we subjected the virtual editor to a ''Turing test'': we asked 58 test subjects to distinguish between the output of our system and a professionally made montage of the same video streams. 31 subjects correctly identified the professionally edited clip. The difference between this outcome and one that could be produced by random guessing is not statistically significant. We conclude that our editing system indeed provides a good approximation of the quality delivered by a professional editor for this particular case study of lecture recording.
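    To give a flavour of non-deterministic, rule-based editing (sketched here in plain Python rather than the CP-logic/Problog formalism the paper actually uses), one can score each feed with soft cinematographic rules and sample the cut stochastically, so the same inputs can yield different montages. Feature keys and weights below are invented for illustration:

      import random

      # Soft cinematographic "rules": each returns a weight for cutting to a feed.
      # In a real system these features would come from the computer vision analysis.
      def prefer_speaker(feed):
          return 3.0 if feed["has_speaker"] else 1.0

      def avoid_quick_cut(feed, current, dwell):
          # Discourage cutting away again within 3 ticks of the last cut.
          return 0.2 if feed["name"] != current and dwell < 3 else 1.0

      def pick_feed(feeds, current, dwell):
          """Sample the next camera in proportion to the product of rule weights."""
          weights = [prefer_speaker(f) * avoid_quick_cut(f, current, dwell) for f in feeds]
          return random.choices(feeds, weights=weights, k=1)[0]["name"]

      feeds = [{"name": "wide", "has_speaker": False},
               {"name": "podium", "has_speaker": True}]
      cut, dwell, montage = "wide", 0, []
      for t in range(10):                       # one cut decision per tick
          nxt = pick_feed(feeds, cut, dwell)
          dwell = dwell + 1 if nxt == cut else 0
          cut = nxt
          montage.append(cut)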
  • Making Movies from Make-Believe Games
    (The Eurographics Association, 2017) Barbulescu, Adela; Garcia, Maxime; Vaufreydaz, Dominique; Cani, Marie Paule; Ronfard, Rémi
    Pretend play is a storytelling technique, naturally used from very young ages, which relies on object substitution to represent the characters of the imagined story. We propose "Make-believe", a system for making movies from pretend play by using 3D printed figurines as props. We capture the rigid motions of the figurines and the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors and transfer them to the virtual story-world. As a proof-of-concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into 3D animation.
  • Design of an Intelligent Navigation System for Participative Computer Animation
    (The Eurographics Association, 2017) Liu, Iou-Shiuan; Li, Tsai-Yen; Christie, Marc
    In this paper, we propose a novel form of interactive entertainment, called Participative Computer Animation, that allows a user to participate in a computer-animated story as an observer. We consider this form of entertainment a kind of interactive storytelling, in which the presentation and perception of the story is under the control of a user through a first-person camera. As the animation of the story unfolds, the user needs to follow and view the relevant events, a complex task which requires them to navigate the 3D environment and hence reduces their immersion. We therefore propose an intelligent navigation mechanism, in which the system can proactively assist the user in reaching designated best-view configurations under time constraints. We have implemented such a system and invited a few users to a pilot study to evaluate it and provide feedback. The experimental results show that our participative computer animation system can enhance the sense of presence, while the intelligent navigation mechanism can improve the quality of perceiving the animated story.
  • Film Ties: A Web-based Virtual 3D Lab for Teaching the Film Art from Script to Blocking
    (The Eurographics Association, 2017) Bares, William; Requierme, Caroline; Obisesan, Elizabeth
    Film production education programs include hands-on training in script writing, planning blocking of performers and cameras on set, camera operation, editing to select the best footage at each beat of the story, and expert critiques to help students improve their work. Unfortunately, this ideal form of active, hands-on learning for film production requires access to specialized equipment, movie sets, and expert film instructors. Complementary film studies education programs teach how to read the visual language of a film and break down each shot to understand how and why it works. Both film production and film studies education involve a social component in which students collectively screen, critique, and break down shots seen in films. This short paper presents work in progress to develop a Web-based virtual 3D lab, which can be used to simulate the central learning activities found in film production and film studies educational programs. The system can also be used to crowdsource annotated corpora of film, which would serve as a resource for film scholars and machine-learning algorithms.
  • Using ECPs for Interactive Applications in Virtual Cinematography
    (The Eurographics Association, 2017) Wu, Hui-Yin; Li, Tsai-Yen; Christie, Marc
    This paper introduces an interactive application of our previous work on the Patterns language as a creative assistant for editing cameras in 3D virtual environments. Patterns is a vocabulary inspired by professional film practice and textbook terminology. The vocabulary allows one to define recurrent stylistic constraints on a sequence of shots, which we term an ''embedded constraint pattern'' (ECP). In our previous work, we proposed a solver that searches for occurrences of ECPs in annotated data, and showed its use in automated analysis of story and emotional elements of film. This work implements a new solver that interactively proposes framing compositions from an annotated database of framings that conform to the user-applied ECPs. We envision this work being incorporated into tools and interfaces for 3D environments in the context of film pre-visualisation, film or digital arts education, video games, and other related applications in film and multimedia.
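    Conceptually, matching an ECP resembles scanning a sequence of annotated shots for sub-sequences that satisfy a list of per-shot constraints. A minimal sketch with invented annotation keys (the real Patterns vocabulary is far richer, supporting relative constraints and nested sub-patterns):

      # Find occurrences of a simple "embedded constraint pattern": a list of
      # per-shot predicates that must hold on consecutive shots.
      shots = [
          {"size": "long",     "angle": "eye"},
          {"size": "medium",   "angle": "eye"},
          {"size": "close-up", "angle": "low"},
          {"size": "close-up", "angle": "low"},
      ]

      intensify = [                                  # a classic "intensify" progression
          lambda s: s["size"] == "long",
          lambda s: s["size"] == "medium",
          lambda s: s["size"] == "close-up",
      ]

      def find_ecp(shots, pattern):
          """Yield start indices where every predicate matches its consecutive shot."""
          n = len(pattern)
          for i in range(len(shots) - n + 1):
              if all(p(s) for p, s in zip(pattern, shots[i:i + n])):
                  yield i

      print(list(find_ecp(shots, intensify)))        # -> [0]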