ICAT-EGVE2022
Item: Manipulating the Sense of Embodiment in Virtual Reality: a Study of the Interactions Between the Senses of Agency, Self-location and Ownership (The Eurographics Association, 2022)
Authors: Guy, Martin; Jeunet-Kelway, Camille; Moreau, Guillaume; Normand, Jean-Marie
Editors: Hideaki Uchiyama; Jean-Marie Normand
In Virtual Reality (VR), the Sense of Embodiment (SoE) corresponds to the feeling of controlling and owning a virtual body, usually referred to as an avatar. The SoE is generally divided into three components: the Sense of Agency (SoA), which characterises the user's level of control over the avatar; the Sense of Self-Location (SoSL), which is the feeling of being located inside the avatar; and the Sense of Body-Ownership (SoBO), which represents the attribution of the virtual body to the user. While previous studies showed that the SoE can be manipulated by disturbing either the SoA, the SoBO or the SoSL, the relationships and interactions between these three components remain unclear. In this paper, we aim to extend the understanding of the SoE and the interactions between its components by 1) experimentally manipulating them in VR via biased visual feedback, and 2) determining whether each sub-component can be selectively altered. To do so, we designed a within-subject experiment in which 47 right-handed participants performed movements of their right hand under different experimental conditions impacting the sub-components of embodiment: the SoA was modified by impacting the control of the avatar with biased visual feedback, the SoBO was altered by modifying the realism of the virtual right hand (anthropomorphic cartoon hand or non-anthropomorphic stick ''fingers''), and the SoSL was controlled via the user's point of view (first or third person). After each trial, participants rated their levels of agency, ownership and self-location on a 7-point Likert scale. Analysis of the results revealed that the three components could not be selectively altered in this experiment. Nevertheless, these preliminary results pave the way for further studies.

Item: Evaluating Techniques to Share Hand Gestures for Remote Collaboration using Top-Down Projection in a Virtual Environment (The Eurographics Association, 2022)
Authors: Teo, Theophilus; Sakurada, Kuniharu; Fukuoka, Masaaki; Sugimoto, Maki
Editors: Hideaki Uchiyama; Jean-Marie Normand
Sharing hand gestures in remote collaboration offers natural and expressive communication between collaborators. Proposed techniques allow sharing dependent (attached to something) or independent (no attachment) hand gestures in immersive remote collaboration. However, research gaps remain on how different techniques for sharing hand gestures impact user behaviour and performance. In this paper, we present an evaluation study comparing the sharing of dependent and independent hand gestures. We developed a prototype supporting three techniques for sharing hand gestures: Attached to Local, Attached to Object, and Independent Hands. We also use top-down projection, an easy-to-set-up method for sharing a local user's environment with a remote user. We compared the three techniques and found that independent hands help a remote user guide a local user through an object-interaction task more quickly than hands attached to the local user. They also give clearer instructions than dependent hands, despite the limited depth perception caused by top-down projection. A similar trend was also found in remote users' preferences.

Item: A Data Collection Protocol, Tool and Analysis for the Mapping of Speech Volume to Avatar Facial Animation (The Eurographics Association, 2022)
Authors: Miyawaki, Ryosuke; Perusquia-Hernandez, Monica; Isoyama, Naoya; Uchiyama, Hideaki; Kiyokawa, Kiyoshi
Editors: Hideaki Uchiyama; Jean-Marie Normand
Knowing the relationship between speech-related facial movement and speech is important for avatar animation. Accurate facial displays are necessary to convey perceptual speech characteristics fully. Recently, efforts have been made to infer the relationship between facial movement and speech with data-driven methodologies using computer vision. To this aim, we propose using blendshape-based facial movement tracking, because it can be easily translated to avatar movement. Furthermore, we present a protocol for audio-visual and behavioral data collection and a web-based tool that aids in collecting and synchronizing data. As a start, we provide a database of six Japanese participants reading emotion-related scripts at different volume levels. Using this methodology, we found a relationship between speech volume and facial movement around the nose, cheek, mouth, and head pitch. We hope that our protocol, web-based tool, and collected data will be useful for other scientists deriving models for avatar animation.

Item: Gaze Guidance in the Real-world by Changing Color Saturation of Objects (The Eurographics Association, 2022)
Authors: Miyamoto, Junpei; Koike, Hideki; Amano, Toshiyuki
Editors: Hideaki Uchiyama; Jean-Marie Normand
In this study, we propose a method for real-world gaze guidance that projects an image onto a real-world object and changes its appearance based on visual saliency. In the proposed method, an image of the object is first acquired. Next, the image is modified to increase the visual prominence of the target object and decrease the visual prominence of other parts of the scene. Finally, the modified image is re-projected onto the object itself. Consequently, the object's appearance and visual prominence are altered, and the user's gaze is drawn to the desired object. Specifically, we propose an image processing method that changes the saturation of an object, which we call the "saturation filter". A coaxial projector-camera system was used to apply the proposed gaze guidance method to a 3D object; such a system does not need to be recalibrated when the object moves. Two experiments were conducted to verify the effectiveness of the proposed method in guiding a viewer's gaze, and the results confirmed that the method achieves the intended gaze guidance effect.
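As a rough illustration of the saturation-filter idea in the entry above, the sketch below boosts saturation inside a target mask and suppresses it elsewhere. The function name, gain values, and use of OpenCV's HSV conversion are assumptions; the paper's saliency estimation and projector compensation pipeline are not reproduced.

```python
import cv2
import numpy as np

def saturation_filter(image_bgr, target_mask, boost=1.4, suppress=0.6):
    """Raise color saturation inside the target region, lower it elsewhere.

    Hypothetical sketch of a saturation-based saliency edit; the gains
    (boost/suppress) are illustrative, not the paper's parameters.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    gain = np.where(target_mask > 0, boost, suppress)   # per-pixel gain map
    hsv[..., 1] = np.clip(hsv[..., 1] * gain, 0, 255)   # scale the S channel only
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```

The edited image would then be re-projected onto the object by the coaxial projector-camera system.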
Item: Could you Relax in an Artistic Co-creative Virtual Reality Experience? (The Eurographics Association, 2022)
Authors: Lomet, Julien; Gaugne, Ronan; Gouranton, Valérie
Editors: Hideaki Uchiyama; Jean-Marie Normand
Our work contributes to the design and study of artistic collaborative virtual environments through the presentation of an immersive and interactive digital artwork installation and an evaluation of the experience's impact on visitors' emotional state. The experience is centered on a dance performance, involves collaborating spectators who are engaged in the experience through full-body movements, and is structured in three phases: relaxation and discovery of the universe, co-creation, and co-active contemplation. The collaborative artwork ''Creative Harmony'' was designed by a multidisciplinary team of artists, researchers and computer scientists from different laboratories. The aesthetic of the artistic environment is inspired by 19th-century German Romantic painting. To foster co-presence, each participant is associated with an avatar that represents both their body and their movements. The music is an original composition designed to give the universe of ''Creative Harmony'' a peaceful and meditative ambiance. The evaluation of the impact on visitors' mood is based on the Brief Mood Introspection Scale (BMIS), a standard tool widely used in psychological and medical contexts. We also present an assessment of the experience through the analysis of questionnaires filled in by the visitors. We observed an increase in the Positive-Tired indicator and a decrease in the Negative-Relaxed indicator, demonstrating the relaxing capabilities of the immersive virtual environment.

Item: Progressive Tearing and Cutting of Soft-bodies in High-performance Virtual Reality (The Eurographics Association, 2022)
Authors: Kamarianakis, Manos; Protopsaltis, Antonis; Angelis, Dimitris; Tamiolakis, Michail; Papagiannakis, George
Editors: Hideaki Uchiyama; Jean-Marie Normand
We present an algorithm that allows a user within a virtual environment to perform, in under 10 ms, real-time unconstrained cuts or consecutive tears, i.e., progressive, continuous fractures on a deformable rigged and soft-body mesh model. To recreate realistic results for different physically principled materials such as sponges and hard or soft tissues, we incorporate a novel soft-body deformation via a particle system layered on top of a linear-blend skinning model. Our framework allows the simulation of realistic, surgical-grade cuts and continuous tears, which is especially valuable in the context of medical VR training. To achieve high performance in VR, our algorithms are based on Euclidean geometric predicates on the rigged mesh, without requiring any specific model pre-processing. The contribution of this work lies in the fact that current frameworks supporting similar kinds of model tearing either do not operate in real time or only apply to predefined tears. The presented framework allows the user to freely cut or tear a 3D mesh model consecutively, in under 10 ms, while preserving its soft-body behaviour and/or allowing further animation.
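For context, the linear-blend skinning model that the tearing framework layers its particle system on is the standard weighted sum of bone transforms. The sketch below is a minimal NumPy version of that textbook formulation, not the paper's implementation; the names and array shapes are assumptions.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, bone_matrices, weights):
    """Standard LBS: v' = sum_i w_i * T_i * v (homogeneous coordinates).

    rest_vertices: (V, 3) rest-pose positions
    bone_matrices: (B, 4, 4) current bone transforms
    weights:       (V, B) skinning weights, each row summing to 1
    """
    ones = np.ones((len(rest_vertices), 1))
    v_h = np.concatenate([rest_vertices, ones], axis=1)       # (V, 4) homogeneous
    per_bone = np.einsum('bij,vj->vbi', bone_matrices, v_h)   # each vertex under each bone
    blended = np.einsum('vb,vbi->vi', weights, per_bone)      # weighted blend per vertex
    return blended[:, :3]
```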
Item: Towards Improving Educational Virtual Reality by Classifying Distraction using Deep Learning (The Eurographics Association, 2022)
Authors: Khokhar, Adil; Borst, Christoph W.
Editors: Hideaki Uchiyama; Jean-Marie Normand
Distractions can cause students to miss out on critical information in educational Virtual Reality (VR) environments. Our work uses generalized features (angular velocities, positional velocities, pupil diameter, and eye openness) extracted from VR headset sensor data (head-tracking, hand-tracking, and eye-tracking) to train a deep CNN-LSTM classifier to detect distractors in our educational VR environment. We present preliminary results demonstrating a 94.93% accuracy for our classifier, an improvement in both accuracy and feature generality over two recent approaches. We believe that our work can be used to improve educational VR by providing a more accurate and generalizable approach to distractor detection.

Item: Exploring EEG-Annotated Affective Animations in Virtual Reality: Suggestions for Improvement (The Eurographics Association, 2022)
Authors: Krogmeier, Claudia; Mousas, Christos
Editors: Hideaki Uchiyama; Jean-Marie Normand
In this work, we recorded brain activity data from participants who viewed 12 affective character animations in virtual reality. Frontal alpha asymmetry (FAA) scores were calculated from electroencephalography (EEG) data to understand objective affective responses to these animations. A subset of these animations was then annotated as either low FAA (eliciting lower FAA responses) or high FAA (eliciting higher FAA responses). Next, these annotated animations were used in a primary 2×2 study in which we a) examined whether we could replicate FAA responses to low-FAA and high-FAA animations in a subsequent study, and b) investigated how the number of characters in the VR environment would influence FAA responses. Additionally, we compared FAA to self-reported affective responses across the four conditions (one character, low FAA; one character, high FAA; four characters, low FAA; four characters, high FAA). In this way, our research seeks to better understand objective and subjective emotional responses in VR. Results suggest that annotated FAA may not predict FAA responses to affective animations in a subsequent study when more characters are present. However, self-reported affective responses to the four conditions are in line with the FAA annotations. We offer suggestions for the development of specific affective experiences in VR based on preliminary brain activity data.
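Frontal alpha asymmetry is conventionally computed as the difference of log alpha-band power between right and left frontal electrodes (e.g., F4 vs. F3). The sketch below shows that standard computation; the electrode montage, referencing, and artifact handling used in the study above are not reproduced, and the preprocessing choices here are assumptions.

```python
import numpy as np
from scipy.signal import welch

def frontal_alpha_asymmetry(left_eeg, right_eeg, fs, band=(8.0, 13.0)):
    """FAA = ln(right alpha power) - ln(left alpha power).

    Standard textbook definition; left_eeg/right_eeg are 1-D signals from
    homologous frontal electrodes, fs is the sampling rate in Hz.
    """
    def alpha_power(signal):
        nperseg = min(len(signal), 2 * int(fs))
        freqs, psd = welch(signal, fs=fs, nperseg=nperseg)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.trapz(psd[mask], freqs[mask])   # integrate PSD over the alpha band

    return np.log(alpha_power(right_eeg)) - np.log(alpha_power(left_eeg))
```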
Item: Characteristics of Background Color Shifts Caused by Optical See-Through Head-Mounted Displays (The Eurographics Association, 2022)
Authors: Hirobe, Daichi; Uranishi, Yuki; Orlosky, Jason; Shirai, Shizuka; Ratsamee, Photchara; Takemura, Haruo
Editors: Hideaki Uchiyama; Jean-Marie Normand
Optical see-through head-mounted displays (OST-HMDs) are increasingly used in many applications as Augmented Reality (AR) support devices. However, problems still prevent their use as general-purpose devices. One of these is the color blending problem: light from the background overlaps with light from the OST-HMD and shifts the OST-HMD's output away from its intended display intensity and color. Though color compensation methods exist, properly compensating for these shifts requires knowing how the background color affects the light that eventually reaches the user's eye when combined with the OST-HMD image. In this paper, we study how background colors shift as a result of passing through the OST-HMD's optics, in order to better inform the development of color compensation methods. We objectively measured the background color for three off-the-shelf OST-HMDs and evaluated the results. We found that all three OST-HMDs shift the background color to a perceptible degree and that the degree of shift depends on the original background color. We also investigated how the degree of shift differs between areas of the OST-HMD screen and between measuring angles. The results showed that, for some OST-HMDs, the background color shift depends on both the measured area and angle.
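The color blending problem described above is often modeled additively: the light reaching the eye is the displayed color plus the background color after transmission through the optics. The sketch below encodes that simple model; the 3x3 transmission matrix and the function names are illustrative assumptions, not the authors' measured characterization.

```python
import numpy as np

def perceived_color(display_rgb, background_rgb, optics_matrix):
    """Additive OST-HMD blending model: eye = display + optics * background.

    optics_matrix is a hypothetical 3x3 matrix modeling how the combiner
    attenuates and shifts transmitted background light.
    """
    blended = np.asarray(display_rgb) + optics_matrix @ np.asarray(background_rgb)
    return np.clip(blended, 0.0, 1.0)

def compensated_display(target_rgb, background_rgb, optics_matrix):
    """Naive compensation: subtract the predicted background contribution."""
    needed = np.asarray(target_rgb) - optics_matrix @ np.asarray(background_rgb)
    return np.clip(needed, 0.0, 1.0)   # clipping shows why dark targets are hard
```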
Item: OmniTiles - A User-Customizable Display Using An Omni-Directional Camera Projector System (The Eurographics Association, 2022)
Authors: Hoffard, Jana; Miyafuji, Shio; Pardomuan, Jefferson; Sato, Toshiki; Koike, Hideki
Editors: Hideaki Uchiyama; Jean-Marie Normand
We present OmniTiles, a manually changeable interface that enables users to customize their own display. This is achieved with tiles in basic shapes that are clipped together via magnets. The created structures are then placed on top of a camera-projector setup that tracks the individual tiles and projects onto them. Generating different structures requires no activation mechanism or prior technical knowledge from the user. The 3D-printed tiles are robust and cost-efficient, making the system particularly suited for non-experts such as families with children. First, we explain the creation process of our tiles and the implementation of the system. We then demonstrate the flexibility of our system via applications unique to our tile approach and discuss the limitations of and future plans for our system.

Item: ProGenVR: Natural Interactions for Procedural Content Generation in VR (The Eurographics Association, 2022)
Authors: Carvalho, Bruno; Mendes, Daniel; Coelho, António; Rodrigues, Rui
Editors: Hideaki Uchiyama; Jean-Marie Normand
3D content creation for virtual worlds is a difficult task, typically requiring specialized tools based on a WIMP interface for modelling, composition and animation. But these interfaces pose several limitations, namely regarding the 2D-3D mapping required both for input and output. To overcome such limitations, VR modelling approaches have been proposed. However, translating tools for creating large 3D scenes to VR settings is not trivial. Procedural content generation (PCG) is one such tool, allowing content to be generated automatically following a set of parameterized rules. In this work, we propose a novel approach to immersive 3D modelling based on a set of procedural rules for content generation and natural interactions, bridging the gap between immersive content creation and PCG. We developed a prototype implementing our approach and conducted a user evaluation to assess its applicability. Results suggest that the time and mental effort spent defining the rules can be offset by the time and physical effort saved when creating complex scenes.

Item: ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments (The Eurographics Association, 2022)
Editors: Jean-Marie Normand; Hideaki Uchiyama

Item: A Rendering Method of Microdisplay Image to Expand Pupil Movable Region without Artifacts for Lenslet Array Near-Eye Displays (The Eurographics Association, 2022)
Authors: Ye, Bi; Fujimoto, Yuichiro; Sawabe, Taishi; Kanbara, Masayuki; Lugtenberg, Geert; Kato, Hirokazu
Editors: Hideaki Uchiyama; Jean-Marie Normand
Near-eye displays (NEDs) with a lenslet array (LA) are a technological advancement that generates a virtual image in the observer's field of view (FOV). Although this technology is useful for designing lightweight NEDs, undesirable artifacts (i.e., cross-talk) occur when the user's pupil becomes larger than the pupil practical movable region (PPMR) or moves out of it. We propose a rendering method for microdisplay images that takes pupil size into account and incorporates the idea of a pupil margin into the ray tracing process. Light rays emitted by one microdisplay pixel (MP) enter the pupil and pupil-margin area after passing through a number of lenses. Through each lens, the MP corresponds to one virtual pixel (VP) on the virtual image plane. The weight of each VP is the intersection area between its ray column and the pupil plus margin, divided by the sum of the intersection areas of all ray columns generated by the MP. The value of each MP is then determined by its VPs and their weights. Through retinal image simulation studies, we confirmed that the proposed rendering approach substantially enlarges the PPMR to accommodate large pupil diameters and wide transition distances, while reducing eye relief to an optimal (sunglasses) distance.
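The weighting rule stated in the abstract above can be written compactly: with A_k the intersection area between the k-th ray column and the pupil plus margin, each weight is w_k = A_k / Σ_j A_j, and the MP value is the weighted sum of its VPs. A minimal sketch, assuming the intersection areas have already been computed geometrically:

```python
def microdisplay_pixel_value(virtual_pixel_values, intersection_areas):
    """Blend virtual pixels by normalized pupil-intersection areas.

    virtual_pixel_values: intensity of each VP seen through one lens
    intersection_areas:   area A_k shared by ray column k and pupil + margin
    Implements w_k = A_k / sum_j A_j; the geometric area computation from
    the paper's ray tracing step is assumed and omitted here.
    """
    total = sum(intersection_areas)
    if total == 0.0:
        return 0.0   # no ray from this MP reaches the pupil
    return sum(v * a / total
               for v, a in zip(virtual_pixel_values, intersection_areas))
```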
Item: Comparing Modalities to Communicate Movement Amplitude During Tool Manipulation in a Shared Learning Virtual Environment (The Eurographics Association, 2022)
Authors: Simon, Cassandre; Otmane, Samir; Chellali, Amine
Editors: Hideaki Uchiyama; Jean-Marie Normand
Shared immersive environments are used to teach technical skills and communicate relevant information. However, designing the appropriate interfaces and interactions to support this communication process remains an open issue. We explore using three modalities to communicate movement amplitude during tool manipulation tasks in a shared immersive environment. The haptic, visual, and verbal modalities were used separately to instruct a learner about the amplitude of movements to perform in 3D space. A user study comparing these modalities shows that instructions given through the visual modality reduced the distance-estimation error. In contrast, the haptic modality helped the learners perform the task significantly faster. The verbal modality significantly increased the perceived sense of copresence but was the least preferred. This research contributes to understanding the importance of each modality when communicating spatial skills in a shared immersive environment. The results suggest that combining modalities could be the most appropriate way to transfer movement amplitude information to a learner, improving both performance and user experience. These findings can enhance the design of immersive collaborative systems and open new perspectives for research on the effectiveness of multimodal interaction to support learning technical skills in VR. The designed tools can be used in different fields, such as medical teaching applications.

Item: AR Object Layout Method Using Miniature Room Generated from Depth Data (The Eurographics Association, 2022)
Authors: Ihara, Keiichi; Kawaguchi, Ikkaku
Editors: Hideaki Uchiyama; Jean-Marie Normand
In augmented reality (AR), users can place virtual objects anywhere in a real-world room, a task called AR layout. Although several object manipulation techniques have been proposed for AR, they are difficult to use for AR layout because of the difficulty of freely changing the position and size of virtual objects. In this study, we make the World-in-Miniature (WIM) technique, a miniature-based manipulation technique originally proposed for virtual reality (VR), available in AR to support AR layout. Our system uses the AR device's depth sensors to acquire a mesh of the room and thereby create and update a miniature of the room in real time. Users can manipulate miniature objects to move virtual objects to arbitrary positions and scale them to arbitrary sizes. In addition, because the miniature object can be manipulated instead of the real-scale object, we expected our system to shorten placement time and reduce the user's workload. In our previous study, we created a prototype and investigated the properties of manipulating miniature objects in AR. In this study, we conducted an experiment to evaluate how our system can support AR layout. To make the task close to actual use, we used various objects and let participants design an AR layout of their own choosing. The results showed that our system significantly reduced workload in physical and temporal demand, although there was no significant difference in total manipulation time.

Item: An Integrated Ducted Fan-Based Multi-Directional Force Feedback with a Head Mounted Display (The Eurographics Association, 2022)
Authors: Watanabe, Koki; Nakamura, Fumihiko; Sakurada, Kuniharu; Teo, Theophilus; Sugimoto, Maki
Editors: Hideaki Uchiyama; Jean-Marie Normand
Adding force feedback to virtual reality applications enhances the immersive experience. We propose a prototype featuring head-based multi-directional force feedback in a virtual environment, designed by integrating four ducted fans into a head-mounted display. Our technical evaluation revealed the force characteristics of the ducted fans, including presentable force, sound level, and latency. In the first part of our study, we investigated the minimum force that a user can perceive in different directions (forward/backward force; up/down/left/right rotational force), suggesting an absolute detection threshold for each directional force. Following that, we evaluated the impact of the force feedback through an immersive flight simulation in the second part of our study. The results indicate that our technique significantly improved user enjoyment, comfort, and visual-and-tactile perception, and reduced simulator sickness in an immersive flight simulation.

Item: FoReCast: Real-time Foveated Rendering and Unicasting for Immersive Remote Telepresence (The Eurographics Association, 2022)
Authors: Tefera, Yonas T.; Mazzanti, Dario; Anastasi, Sara; Caldwell, Darwin G.; Fiorini, Paolo; Deshpande, Nikhil
Editors: Hideaki Uchiyama; Jean-Marie Normand
Modern virtual reality (VR) interfaces are increasingly used as visualization and interaction media in 3D telepresence systems. Remote environments scanned using RGB-D cameras and represented as dense point clouds are visualized in VR in real time to increase the user's immersion. Such interfaces require high-quality, low-latency, high-throughput transmission; in other words, the entire pipeline from data acquisition to visualization in VR has to be optimized for high performance. Point-cloud data particularly suffers from network latency and throughput limitations, which negatively impact the user experience in telepresence. The human visual system offers an insight into approaching these challenges: human eyes have their sharpest visual acuity at the center of their field of view, and acuity falls off towards the periphery. This acuity fall-off inspired the design of a novel immersive 3D data visualization framework that facilitates the processing, transmission, and visualization of dense point clouds in VR. The proposed FoReCast framework shows reductions of more than 60% in both latency and throughput. A preliminary user study shows that the framework does not significantly affect the user's quality of experience in immersive remote telepresence.
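As a toy illustration of the acuity-driven point-cloud reduction that the FoReCast entry above builds on, the sketch below keeps every point within a foveal cone around the gaze ray and thins points with increasing eccentricity. The falloff function, parameter values, and names are assumptions; the paper's actual sampling, encoding, and unicasting pipeline is not reproduced.

```python
import numpy as np

def foveated_downsample(points, eye_pos, gaze_dir, fovea_deg=5.0, min_keep=0.05):
    """Keep all points inside the foveal cone; thin the periphery.

    points: (N, 3) point cloud; eye_pos, gaze_dir: 3-vectors.
    Keep probability is 1 within `fovea_deg` of the gaze direction and
    decays with eccentricity down to `min_keep` (illustrative falloff).
    """
    d = points - eye_pos
    d /= np.linalg.norm(d, axis=1, keepdims=True)          # unit view rays
    g = gaze_dir / np.linalg.norm(gaze_dir)
    ecc = np.degrees(np.arccos(np.clip(d @ g, -1.0, 1.0)))  # angular eccentricity
    keep_p = np.maximum(min_keep, fovea_deg / np.maximum(ecc, fovea_deg))
    return points[np.random.rand(len(points)) < keep_p]
```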
Item: Cast-Shadow Removal for Cooperative Adaptive Appearance Manipulation (The Eurographics Association, 2022)
Authors: Uesaka, Shoko; Amano, Toshiyuki
Editors: Hideaki Uchiyama; Jean-Marie Normand
We propose a framework to suppress over-illumination in the overlapping area created by cooperative adaptive appearance manipulation with two independently working projector-camera systems. The proposed method estimates projection overlap in each projector-camera unit without geometric mapping or projection-image sharing, and then suppresses the projection illumination in the overlapping area through compensated reflectance estimation. Experimental results using two projector-camera systems confirm that the proposed method correctly and adaptively detects overlapping areas and removes cast shadows that would otherwise cause uneven illumination during cooperative appearance manipulation of a moving 3D target.