Browsing by Author "Kiyokawa, Kiyoshi"
Now showing 1 - 5 of 5
Item: 3D-Aware Image Relighting with Object Removal from Single Image (The Eurographics Association, 2022)
Zhang, Yujia; Perusquia-Hernández, Monica; Isoyama, Naoya; Kawai, Norihiko; Uchiyama, Hideaki; Sakata, Nobuchika; Kiyokawa, Kiyoshi; Theophilus Teo; Ryota Kondo
We propose a method to relight scenes in a single image while removing unwanted objects, combining 3D-aware inpainting and relighting into a new image-editing functionality. First, the proposed method estimates a depth image from an RGB image using single-view depth estimation. Next, the RGB and depth images are masked by the user, who specifies the unwanted objects. Then, the masked RGB and depth images are simultaneously inpainted by our proposed neural network. For relighting, a 3D mesh model is first reconstructed from the inpainted depth image and is then relit with a standard relighting pipeline. In this process, cast-shadow removal, sky removal, and albedo estimation are optionally performed to suppress artifacts in outdoor scenes. Through these processes, various types of relighting can be achieved from a single photograph while excluding the colors and shapes of unwanted objects.

Item: A Data Collection Protocol, Tool and Analysis for the Mapping of Speech Volume to Avatar Facial Animation (The Eurographics Association, 2022)
Miyawaki, Ryosuke; Perusquia-Hernandez, Monica; Isoyama, Naoya; Uchiyama, Hideaki; Kiyokawa, Kiyoshi; Hideaki Uchiyama; Jean-Marie Normand
Knowing the relationship between speech-related facial movement and speech is important for avatar animation. Accurate facial displays are necessary to convey perceptual speech characteristics fully. Recently, efforts have been made to infer the relationship between facial movement and speech with data-driven methodologies using computer vision. To this aim, we propose using blendshape-based facial movement tracking, because it can be easily translated to avatar movement. Furthermore, we present a protocol for audio-visual and behavioral data collection and a web-based tool that aids in collecting and synchronizing the data. As a start, we provide a database of six Japanese participants reading emotion-related scripts at different volume levels. Using this methodology, we found a relationship between speech volume and facial movement around the nose, cheek, mouth, and head pitch. We hope that our protocols, web-based tool, and collected data will be useful for other scientists deriving models for avatar animation.

Item: Evaluation of Embodied Agent Positioning and Moving Interfaces for an AR Virtual Guide (The Eurographics Association, 2019)
Techasarntikul, Nattaon; Ratsamee, Photchara; Orlosky, Jason; Mashita, Tomohiro; Uranishi, Yuki; Kiyokawa, Kiyoshi; Takemura, Haruo; Kakehi, Yasuaki and Hiyama, Atsushi
Augmented Reality (AR) has become a popular technology in museums, and many venues now provide AR applications inside gallery spaces. To improve museum tour experiences, we have developed an embodied-agent AR guide system that explains detailed, multi-section information hidden within a painting. In this paper, we investigate the effect of different guiding interfaces that use such an embodied agent when explaining large-scale artwork. Our interfaces combine two guiding positions, inside and outside the artwork area, with two agent movements, teleporting and flying. To test these interfaces, we conducted a within-subjects experiment comparing Inside-Teleport, Inside-Flying, Outside-Teleport, and Outside-Flying with 28 participants. Results indicated that although the Inside-Flying interface often obstructed the painting, most participants preferred it because it was perceived as natural and helped users find corresponding art details more easily.

Item: Identifying Language-induced Mental Load from Eye Behaviors in Virtual Reality (The Eurographics Association, 2022)
Schirm, Johannes; Perusquia-Hernández, Monica; Isoyama, Naoya; Uchiyama, Hideaki; Kiyokawa, Kiyoshi; Theophilus Teo; Ryota Kondo
We compared content-independent eye tracking metrics under different levels of language-induced mental load in virtual reality (VR). We designed a virtual environment that balances consistent recording of eye data with user experience and freedom of action. We also took steps towards quantifying the phenomenon of not focusing exactly on surfaces by proposing "focus offset" as a VR-compatible eye metric. Responses to conditions with higher mental load included larger and more variable pupil sizes and fewer fixations. We also observed less voluntary gazing at distraction content and a tendency to look through surfaces.

Item: Virtual Zoomorphic Accessories for Enhancing Perception of Vehicle Dynamics in Real-Time (The Eurographics Association, 2023)
Momota, Koji; Uranishi, Yuki; Kiyokawa, Kiyoshi; Orlosky, Jason; Ratsamee, Photchara; Kobayashi, Masato; Abey Campbell; Claudia Krogmeier; Gareth Young
This research introduces virtual zoomorphic accessories, inspired by the animation principle of "follow-through and overlapping action", to enhance pedestrian comprehension of vehicle speed. We employed dynamic rabbit-ear-like accessories on vehicles as a visual representation. The animation offers pedestrians an intuitive sense of the vehicle's speed. We conducted an experiment using accessories that visualize speed in videos. The results indicate that such animated zoomorphic accessories can bolster understanding of vehicle behavior.
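For readers who want a feel for the relighting step described in "3D-Aware Image Relighting with Object Removal from Single Image", the following is a minimal sketch, not the authors' code: it derives per-pixel normals from a depth image by finite differences and shades an albedo image with a single directional (Lambertian) light. The inputs are toy values and the function names are ours.

import numpy as np

def normals_from_depth(depth: np.ndarray) -> np.ndarray:
    # Per-pixel surface normals from a depth image via finite differences.
    dz_dy, dz_dx = np.gradient(depth)
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    return normals / np.linalg.norm(normals, axis=2, keepdims=True)

def relight(albedo: np.ndarray, depth: np.ndarray, light_dir) -> np.ndarray:
    # Shade an albedo image with one directional light (Lambertian n.l term).
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    shading = np.clip(normals_from_depth(depth) @ light, 0.0, 1.0)
    return albedo * shading[..., None]

# Toy example: a sloped "ground plane" depth map and a flat gray albedo.
depth = np.tile(np.linspace(2.0, 5.0, 64), (64, 1))
albedo = np.full((64, 64, 3), 0.7)
print(relight(albedo, depth, light_dir=(0.3, -0.5, 0.8)).shape)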
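The speech-volume-to-facial-movement mapping studied in "A Data Collection Protocol, Tool and Analysis for the Mapping of Speech Volume to Avatar Facial Animation" can be approximated, purely for illustration, by driving a single blendshape from frame-wise RMS energy. The sketch below assumes a hypothetical "jawOpen" blendshape and simple exponential smoothing; it is not the paper's model.

import numpy as np

def rms_per_frame(samples: np.ndarray, frame_len: int = 512) -> np.ndarray:
    # Root-mean-square energy of non-overlapping audio frames.
    n = len(samples) // frame_len * frame_len
    frames = samples[:n].reshape(-1, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def volume_to_jaw_open(rms: np.ndarray, gain: float = 4.0, smooth: float = 0.3) -> np.ndarray:
    # Map RMS to a [0, 1] "jawOpen" blendshape weight with exponential smoothing.
    weights = np.clip(rms * gain, 0.0, 1.0)
    out = np.empty_like(weights)
    prev = 0.0
    for i, w in enumerate(weights):
        prev = smooth * prev + (1.0 - smooth) * w
        out[i] = prev
    return out

# Toy example: one second of synthetic "speech" at 16 kHz.
rng = np.random.default_rng(0)
audio = rng.normal(scale=0.1, size=16000) * np.hanning(16000)
print(volume_to_jaw_open(rms_per_frame(audio))[:5])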
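"Identifying Language-induced Mental Load from Eye Behaviors in Virtual Reality" proposes "focus offset" as a VR-compatible metric for not focusing exactly on surfaces. One plausible reading, shown below as an assumption rather than the authors' definition, is the difference between the distance to the binocular vergence point and the distance to the surface hit by the gaze ray (positive when the user appears to look "through" the surface).

import numpy as np

def vergence_point(p_l, d_l, p_r, d_r):
    # Midpoint of the shortest segment between the left and right gaze rays.
    p_l, p_r = np.asarray(p_l, float), np.asarray(p_r, float)
    d_l = np.asarray(d_l, float); d_l /= np.linalg.norm(d_l)
    d_r = np.asarray(d_r, float); d_r /= np.linalg.norm(d_r)
    w0 = p_l - p_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    s = (b * e - c * d) / denom if denom > 1e-9 else 0.0
    t = (a * e - b * d) / denom if denom > 1e-9 else 0.0
    return 0.5 * ((p_l + s * d_l) + (p_r + t * d_r))

def focus_offset(p_l, d_l, p_r, d_r, head_pos, surface_hit_dist):
    # Negative: focus in front of the hit surface; positive: "looking through" it.
    focus_dist = np.linalg.norm(vergence_point(p_l, d_l, p_r, d_r) - np.asarray(head_pos, float))
    return focus_dist - surface_hit_dist

# Toy example: eyes converge on a point 2 m away while the gaze ray hits a wall at 3 m.
print(focus_offset((-0.03, 0, 0), (0.03, 0, 2), (0.03, 0, 0), (-0.03, 0, 2), (0, 0, 0), 3.0))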
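Finally, the "follow-through and overlapping action" principle behind "Virtual Zoomorphic Accessories for Enhancing Perception of Vehicle Dynamics in Real-Time" can be sketched as a damped spring that makes an ear accessory's angle lag behind vehicle speed. The constants and the speed-to-angle gain are illustrative assumptions, not values from the paper.

def ear_follow_through(speeds, dt=1 / 30, stiffness=40.0, damping=10.0, gain=2.0):
    # Per-frame ear pitch angles (degrees) trailing the vehicle speed signal.
    angle, velocity = 0.0, 0.0
    angles = []
    for v in speeds:
        target = gain * v                          # faster vehicle -> ears swept further back
        accel = stiffness * (target - angle) - damping * velocity
        velocity += accel * dt                     # semi-implicit Euler integration
        angle += velocity * dt
        angles.append(angle)
    return angles

# Toy example: the vehicle speeds up for 40 frames, then brakes to a stop.
speeds = [min(0.5 * t, 10.0) for t in range(40)] + [max(10.0 - 1.0 * t, 0.0) for t in range(20)]
print([round(a, 1) for a in ear_follow_through(speeds)[::10]])

Because the spring is underdamped, the ear angle overshoots and settles after each speed change, which is exactly the lagging motion the accessory uses to make acceleration and braking visible.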