WICED: Eurographics Workshop on Intelligent Cinematography and Editing
Browsing by Issue Date. Now showing 1 - 20 of 52.
Item: The Influence of a Moving Camera on the Perception of Distances between Moving Objects (The Eurographics Association, 2015)
Authors: Garsoffky, Bärbel; Meilinger, Tobias; Horeis, Chantal; Schwan, Stephan
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: Movies and especially animations, where cameras can move nearly without restriction, often use moving cameras, thereby intensifying continuity [Bor02] and influencing the impression of cinematic space [Jon07]. Further studies effectively use moving cameras to explore the perception and processing of real-world action [HUGG14]. But what is the influence of simultaneous movements of actors and camera on the basic perception and understanding of film sequences? It seems reasonable to expect that understanding of object movement is easiest from a static viewpoint, but that viewpoint motion can nevertheless be partialed out during perception.

Item: A Computational Framework for Vertical Video Editing (The Eurographics Association, 2015)
Authors: Gandhi, Vineet; Ronfard, Rémi
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: Vertical video editing is the process of digitally editing the image within the frame, as opposed to horizontal video editing, which arranges shots along a timeline. Vertical editing can be a time-consuming and error-prone process when done with manual key-framing and simple interpolation. In this paper, we present a general framework for automatically computing a variety of cinematically plausible shots from a single input video, suited to the special case of live performances. Drawing on working practices in traditional cinematography, the system acts as a virtual camera assistant to the film editor, who can call novel shots in the edit room with a combination of high-level instructions and manually selected keyframes.

Item: Designing Computer Based Archaeological 3D-Reconstructions: How Camera Zoom Influences Attention (The Eurographics Association, 2015)
Authors: Glaser, Manuela; Lengyel, Dominik; Toulouse, Catherine; Schwan, Stephan
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: Previous empirical literature [Sal94; WB00] indicates that zooms have an attention-guiding effect. To substantiate this conclusion, an eye-tracking study was conducted to examine the influence of camera zoom on the attention processes of viewers.

Item: Insight: An Annotation Tool and Format for Film Analysis (The Eurographics Association, 2015)
Authors: Merabti, Billal; Wu, Hui-Yin; Sanokho, Cunka Bassirou; Galvane, Quentin; Lino, Christophe; Christie, Marc
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: A multitude of annotation tools and formats for the analysis of audiovisual documents are available and address a wide range of tasks (see Elan [SW08] and Anvil [Kip10]). Existing approaches, however, remain restricted in the visual elements they can annotate on the screen: for example, annotating on-screen arrangements of characters so as to evaluate continuity editing over shots is problematic with most annotation tools and languages. In this paper, we propose an annotation language broadly suited to analytical and generative cinematography systems. The language covers the axes of timing, spatial composition, hierarchical film structure, and links to contextual elements in the film in a way that extends previous contributions.

Item: Visibility-Aware Framing for 3D Modelers (The Eurographics Association, 2015)
Authors: Ranon, Roberto; Christie, Marc
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: Modelling and editing entire 3D scenes is a fairly complex task. The process generally comprises many individual operations, such as selecting a target object and iterating over changes in the view and changes of the object's properties such as location, shape, or material. To assist the stage of viewing the selected target, 3D modellers offer some automated framing techniques. Most have in common the ability to translate the camera so that the target is framed in the center of the viewport and has a given size on the screen.
However, the visibility of the target is never taken into account, leaving the task of selecting an unoccluded view to the user, a process that proves to be time-consuming in cluttered environments. In this paper, we address this issue by first analyzing the requirements for an automated framing technique with a central focus on visibility. We then propose an automated framing technique that relies on particle swarm optimization and implement it inside the Unity 4 Editor. Early evaluations demonstrate the benefits of the technique over the corresponding standard Unity function and suggest interesting perspectives for improving a simple yet essential feature of any 3D modelling tool.

Item: Implementing Game Cinematography: Technical Challenges and Solutions for Automatic Camera Control in Games (The Eurographics Association, 2015)
Authors: Burelli, Paolo
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: Cinematographic games are a rising genre in the computer games industry, and an increasing number of published titles include some aspects of cinematography in the gameplay or the storytelling. At present, camera handling in computer games is managed primarily through custom scripts and animations, and there is an inverse relationship between player freedom and cinematographic quality. In this paper, we describe a series of technical challenges connected with the development of an automatic camera control library for computer games, and we showcase a set of algorithmic and engineering solutions.

Item: Film Ties: An Architecture for Collaborative Data-driven Cinematography (The Eurographics Association, 2015)
Authors: Bares, William; Schwartz, Donald; Segundo, Cristovam; Nitya, Santoshi; Aiken, Sydney; Medbery, Clinton
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: The ability to store, share, and re-use digital assets is of primary importance in the film production pipeline. Digital assets typically include texture images, three-dimensional models, and scripts for creating special effects such as water or explosions. Despite the growing use of virtual cinematography in the film production pipeline, existing tools to manage digital assets are unable to harness the knowledge inherent in the creative composition and editing decisions made by cinematographers. This work introduces Film Ties, a new form of visual communication in which a first artist creates a virtual camera composition whose visual composition properties are stored in a database, allowing other artists to adopt that composition for use in their own virtual scenes. On adopting a composition, the system computes a comparable composition of subjects situated in a second virtual environment. Artists can comment on shared compositions using text, or propose additional compositions that adapt or improve upon the original. The stored compositions and suggested edits also serve as an educational resource for junior filmmakers and film students.

Item: Computer Generation of Filmic Discourse from a Cognitive/Affective Perspective (The Eurographics Association, 2015)
Authors: Bateman, John; Christie, Marc; Ranon, Roberto; Ronfard, Remi; Smith, Tim
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: In this position paper, we argue that advances in intelligent cinematography require better models of the multimodal structure of filmic discourse, and of the inferences made by an audience while films are being watched. Such questions have been addressed by film scholars and cognitive scientists in the past, but their models have not so far had sufficient impact on the intelligent cinematography community. In the future, this community should become more interested in understanding how cinematography and editing affect the movie in the audience's mind.
At the same time, such frameworks can help researchers in computer graphics use computer simulations to build experiments in film cognition and test hypotheses in film theory.

Item: Toward More Effective Viewpoint Computation Tools (The Eurographics Association, 2015)
Authors: Lino, Christophe
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: The proper setting of cameras is an essential component of many 3D computer graphics applications. Commonly, viewpoint computation tools rely on specifying visual criteria on a number of targets, each expressed as a constraint, and then on an optimization-based technique to compute a 7-degree-of-freedom camera setting that best satisfies this set of constraints. Proposed methods can be evaluated in terms of their efficiency (required computation time), but there is a clear lack of proper evaluation of their effectiveness (how aesthetically satisfactory the generated viewpoints are). Moreover, current methods rely on maximizing a single fitness function built as a weighted sum (i.e. a pure tradeoff) over the satisfaction of each criterion, considered independently of all others. In contrast, a cinematographer's sense of whether a viewpoint is satisfactory is far from a tradeoff between visual criteria. These issues call for means to better evaluate the overall satisfaction of a composition problem, and for methods to improve the search for a satisfactory viewpoint. In this paper, we present work in progress that aims to steer computation tools in this direction. We first propose a range of aggregation functions which supplement the classical tradeoff function and make it possible to express richer relationships between criteria. We then propose to aggregate the individual satisfactions of criteria hierarchically instead of simply summing them. We finally propose to reduce the search to camera positions (i.e. from 7D to 3D), while constraining the framing more strongly by separately optimizing the camera's orientation and focal length.

Item: Efficient Salient Foreground Detection for Images and Video using Fiedler Vectors (The Eurographics Association, 2015)
Authors: Perazzi, Federico; Sorkine-Hornung, Olga; Sorkine-Hornung, Alexander
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: Automatic detection of salient image regions is a useful tool with applications in intelligent camera control, virtual cinematography, video summarization and editing, evaluation of viewer preferences, and many others. This paper presents an effective method for detecting potentially salient foreground regions. Salient regions are identified by eigenvalue analysis of a graph Laplacian defined over the color similarity of image superpixels, under the assumption that the majority of pixels on the image boundary belong to the non-salient background. In contrast to previous methods based on graph cuts or graph partitioning, our method provides continuously valued saliency estimates with properties complementary to recently proposed color-contrast-based approaches. Moreover, exploiting discriminative properties of the Fiedler vector, we devise an SVM-based classifier that allows us to determine whether an image contains any salient objects at all, a problem that has been largely neglected in previous work. We also describe how the per-frame saliency detection can be extended to improve its spatiotemporal coherence when computed on video sequences. Extensive evaluation on several datasets demonstrates and validates the state-of-the-art performance of the proposed method.

Item: Comparing Film-editing (The Eurographics Association, 2015)
Authors: Galvane, Quentin; Ronfard, Rémi; Christie, Marc
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: Through a precise 3D animated reconstruction of a key scene in the movie "Back to the Future", directed by Robert Zemeckis, we are able to make a detailed comparison of two very different versions of editing.
The first version closely follows film editor Arthur Schmidt's original sequence of shots as cut in the movie. The second version is automatically generated with our recent algorithm [GRLC15] using the same choice of cameras. A shot-by-shot and cut-by-cut comparison demonstrates that our algorithm provides a remarkably pleasant and valid solution even in such a rich narrative context, although it differs from the original version more than 60% of the time. Our explanation is that our version avoids stylistic effects, whereas the original version favors such effects and uses them effectively. As a result, we suggest that our algorithm can be thought of as a baseline ("film-editing zero degree") for future work on film-editing style.

Item: Frontmatter: Eurographics Workshop on Intelligent Cinematography and Editing (The Eurographics Association, 2015)
Authors: Ronfard, Rémi; Christie, Marc; Bares, William
Editors: W. Bares, M. Christie, R. Ronfard

Item: Stylistic Patterns for Generating Cinematographic Sequences (The Eurographics Association, 2015)
Authors: Wu, Hui-Yin; Christie, Marc
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: For film editors, deciding how to compose and sequence camera framings involves a number of elements: the semantics of shots, framings, story context, consistency of style, and artistic value. AI systems have brought a number of techniques for creating procedural generative systems for game animation and narrative content. However, owing to its computational complexity, current automated cinematography relies heavily on constraint- and rule-based systems, or on pre-calculated camera positions and movements that implement well-known idioms from traditional cinematography. Existing dynamic systems react only in limited ways to complex story content and cannot bring affective and emotional depth to the scenario. Yet in actual filmmaking, directors often employ camera techniques, which are arrangements of shots and framings, to convey multiple levels of meaning in a sequence. In this paper, we propose a language for defining high-level camera styles, called Patterns, which can express the aesthetic properties of framing and shot sequencing, and the camera techniques used by real directors. Patterns can be seen as the semantics of camera transitions from one frame to another. The language takes an editor's view of on-screen aesthetic properties: the size, orientation, relative position, and movement of actors and objects across a number of shots. We illustrate this language through a number of examples and demonstrations. Combined with camera placement algorithms, we demonstrate the language's capacity to create complex shot sequences in data-driven generative systems for 3D storytelling applications.

Item: Key-frame Based Spatiotemporal Scribble Propagation (The Eurographics Association, 2015)
Authors: Dogan, Pelin; Aydın, Tunç Ozan; Stefanoski, Nikolce; Smolic, Aljoscha
Editors: W. Bares, M. Christie, R. Ronfard
Abstract: We present a practical, key-frame based scribble propagation framework. Our method builds upon recent advances in spatiotemporal filtering by adding the key components required for seamless temporal propagation. To that end, we propose a temporal propagation scheme that eliminates holes in regions that no motion path reaches reliably. Additionally, to facilitate the practical use of our technique, we formulate a pair of image edge metrics informed by the body of work on edge-aware filtering, and introduce the "hybrid scribble propagation" concept, in which each scribble's propagation can be controlled by user-defined edge-stopping criteria. Our method improves on the current state of the art in the quality of propagation results and in terms of memory complexity.
Importantly, our method operates on a limited, user-defined temporal window and therefore has constant (rather than linear) memory complexity, so it scales to videos of arbitrary length. The quality of our propagation results is demonstrated for various video processing applications such as mixed HDR video tone mapping, artificial depth of field for video, and local video recoloring.

Item: Contact Visualization (The Eurographics Association, 2016)
Authors: Marvie, Jean-Eudes; Sourimant, Gael; Dufay, A.
Editors: M. Christie, Q. Galvane, A. Jhala, R. Ronfard
Abstract: We present in this paper a production-oriented technique designed to visualize contact between 3D objects in real time. The motivation of this work is to provide integrated tools in the production workflow that help artists set up scenes and assets without undesired floating objects or inter-penetrations. Such issues can occur easily and remain unnoticed until the shading and/or lighting stages are set up, leading to retakes of the modeling or animation stages. With our solution, artists can visualize contact between 3D objects in real time while setting up their assets, thus correcting such misalignments earlier. Being based on a cheap post-processing shader, our solution can be used even on low-end GPUs.

Item: Introducing Basic Principles of Haptic Cinematography and Editing (The Eurographics Association, 2016)
Authors: Guillotel, Philippe; Danieau, Fabien; Fleureau, Julien; Rouxel, Ines
Editors: M. Christie, Q. Galvane, A. Jhala, R. Ronfard
Abstract: Adding the sense of touch to hearing and seeing would be necessary for a truly immersive experience. This is the promise of the growing "4D cinema" based on motion platforms and other sensory effects (water spray, wind, scent, etc.). Touch provides a new dimension for filmmakers and opens a new creative area: haptic cinematography. However, design rules are required to use this sensory modality appropriately and improve the user experience. This paper addresses this issue by introducing principles of haptic cinematography editing. The proposed elements are based on early feedback from several creative works performed by the authors (including a student in cinema arts), anticipating the role of haptographers, experts in haptic content creation. Three full short movies have been augmented with haptic feedback and tested by numerous users in order to provide the inputs for this introductory paper.

Item: Analysing Cinematography with Embedded Constrained Patterns (The Eurographics Association, 2016)
Authors: Wu, Hui-Yin; Christie, Marc
Editors: M. Christie, Q. Galvane, A. Jhala, R. Ronfard
Abstract: Cinematography carries messages about the plot, the emotion, or the more general feeling of a film. Yet cinematographic devices are often overlooked in existing approaches to film analysis. In this paper, we present Embedded Constrained Patterns (ECPs), a dedicated query language for searching annotated film clips for sequences that fulfill complex stylistic constraints. ECPs are groups of framing and sequencing constraints defined using vocabulary from film textbooks. Using a set algorithm, all occurrences of an ECP can be found in annotated film sequences. We use a film clip from The Lord of the Rings to demonstrate a range of ECPs that can be detected, and analyse them in relation to the story and emotions of the film.

Item: Automatic Lighting Design from Photographic Rules (The Eurographics Association, 2016)
Authors: Wambecke, Jérémy; Vergne, Romain; Bonneau, Georges-Pierre; Thollot, Joëlle
Editors: M. Christie, Q. Galvane, A. Jhala, R. Ronfard
Abstract: Lighting design is crucial in 3D scene modeling for its ability to provide cues for understanding the shape of objects. However, a lot of time, skill, and trial and error is required to obtain a desired result. Existing automatic lighting methods for conveying the shape of 3D objects are based either on costly optimizations or on non-realistic shading effects, and they do not take material information into account. In this paper, we propose a new method that automatically suggests a lighting setup to reveal the shape of a 3D model, taking into account its material and geometric properties. Our method is independent of the rendering algorithm. It is based on lighting rules extracted from photography books, applied through a fast and simple geometric analysis. We illustrate our algorithm on objects with different shapes and materials, and we show by both visual and metric evaluation that it is comparable to optimization methods in terms of the quality of lighting setups. Thanks to its genericity, our algorithm could be integrated into any rendering pipeline to suggest appropriate lighting.

Item: WICED 2016: Frontmatter (Eurographics Association, 2016)
Authors: Ronfard, Rémi; Christie, Marc; Galvane, Quentin; Jhala, Arnav

Item: Automated Cinematography with Unmanned Aerial Vehicles (The Eurographics Association, 2016)
Authors: Galvane, Quentin; Fleureau, Julien; Tariolle, Francois-Louis; Guillotel, Philippe
Editors: M. Christie, Q. Galvane, A. Jhala, R. Ronfard
Abstract: The rise of unmanned aerial vehicles and their increasing use in the cinema industry call for the creation of dedicated tools. Though a range of techniques exists to automatically control drones for a variety of applications, none has considered the problem of producing cinematographic camera motion in real time for shooting purposes. In this paper, we present our approach to UAV navigation for autonomous cinematography. The contributions of this research are twofold: (i) we adapt virtual camera control techniques to UAV navigation; (ii) we introduce a drone-independent platform for high-level user interaction that integrates cinematographic knowledge. The results presented in this paper demonstrate the capacity of our tool to capture live movie scenes involving one or two moving actors.
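As an aside on the "Visibility-Aware Framing for 3D Modelers" entry above: particle swarm optimization (PSO), the search technique that abstract relies on, can be sketched on a toy framing problem. The scene, cost terms, and parameters below are invented for illustration and are not the paper's actual objective or its Unity integration; the sketch only shows the general idea of optimizing a camera position for an unoccluded view of a target at a desired distance.

```python
import numpy as np

rng = np.random.default_rng(0)

TARGET = np.array([0.0, 0.0, 0.0])     # object to frame (assumed)
OCCLUDER = np.array([3.0, 0.0, 0.0])   # center of a blocking sphere (assumed)
OCCLUDER_R = 1.0
IDEAL_DIST = 5.0                       # desired camera-target distance (assumed)

def occluded(cam):
    """True if the segment from cam to TARGET passes through the occluder."""
    d = TARGET - cam
    t = np.clip(np.dot(OCCLUDER - cam, d) / np.dot(d, d), 0.0, 1.0)
    closest = cam + t * d              # point on the segment nearest the sphere
    return np.linalg.norm(closest - OCCLUDER) < OCCLUDER_R

def cost(cam):
    c = abs(np.linalg.norm(cam - TARGET) - IDEAL_DIST)  # framing-size term
    if occluded(cam):
        c += 100.0                                      # visibility penalty
    return c

def pso(n_particles=30, iters=80, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO over 3D camera positions."""
    pos = rng.uniform(-10, 10, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

cam = pso()   # a camera position with a clear line of sight to the target
```

In this toy setup the swarm converges to a position roughly 5 units from the target and outside the occluder's shadow; the paper's method additionally handles on-screen framing constraints inside a full 3D editor.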
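The "Toward More Effective Viewpoint Computation Tools" entry above contrasts a weighted-sum tradeoff with hierarchical aggregation of per-criterion satisfactions. A minimal sketch of that contrast, under stated assumptions: the criterion names, weights, grouping, and the product-of-group-minima scheme below are illustrative inventions, not the paper's actual aggregation functions.

```python
def weighted_sum(sat, weights):
    """Classical tradeoff: a badly violated criterion can be
    compensated by the others."""
    total = sum(weights.values())
    return sum(weights[k] * sat[k] for k in sat) / total

def hierarchical(sat, groups):
    """One possible hierarchical scheme (assumed for illustration):
    take the worst satisfaction within each group, then multiply the
    group scores, so a fully violated criterion cannot be masked."""
    score = 1.0
    for g in groups:
        score *= min(sat[k] for k in g)
    return score

# Per-criterion satisfactions in [0, 1] for a candidate viewpoint:
# the actor is framed at the right size and angle but fully occluded.
sat = {"size": 0.9, "angle": 0.8, "visibility": 0.0}
weights = {"size": 1.0, "angle": 1.0, "visibility": 1.0}
groups = [("visibility",), ("size", "angle")]

tradeoff = weighted_sum(sat, weights)   # (0.9 + 0.8 + 0.0) / 3, about 0.57
hier = hierarchical(sat, groups)        # 0.0: the occlusion vetoes the viewpoint
```

The weighted sum still rates the fully occluded viewpoint above 0.5, while the hierarchical aggregation rejects it outright, which is the kind of non-compensatory behavior the abstract argues matches a cinematographer's judgment.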
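Finally, the "Efficient Salient Foreground Detection" entry above builds on the Fiedler vector: the eigenvector of the second-smallest eigenvalue of a graph Laplacian. A minimal sketch of that construction, assuming a hand-built toy similarity matrix standing in for the paper's superpixel color similarities (the graph, weights, and names here are invented for illustration, not the paper's implementation):

```python
import numpy as np

def fiedler_vector(W):
    """Return the Fiedler vector (eigenvector of the second-smallest
    eigenvalue) of the graph Laplacian L = D - W for a symmetric
    non-negative similarity matrix W."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return eigvecs[:, 1]                   # skip the constant eigenvector

# Toy graph: two clusters of "superpixels", {0, 1, 2} and {3, 4},
# strongly connected inside each cluster, weakly connected between them.
W = np.array([
    [0.00, 1.00, 1.00, 0.05, 0.00],
    [1.00, 0.00, 1.00, 0.00, 0.00],
    [1.00, 1.00, 0.00, 0.00, 0.05],
    [0.05, 0.00, 0.00, 0.00, 1.00],
    [0.00, 0.00, 0.05, 1.00, 0.00],
])

f = fiedler_vector(W)
labels = f > 0   # the sign of the Fiedler vector separates the two clusters
```

On real images the same idea is applied to a Laplacian over superpixel color similarities, and the sign (and magnitude) of the Fiedler vector yields a continuously valued foreground/background separation rather than a hard graph cut.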
Page 1 of 3.