ICAT-EGVE2022

Hiyoshi, Yokohama, Japan | November 30 - December 3, 2022

(for Posters and Demos see ICAT-EGVE 2022 - Posters and Demos)

Interaction
Comparing Modalities to Communicate Movement Amplitude During Tool Manipulation in a Shared Learning Virtual Environment
Cassandre Simon, Samir Otmane, and Amine Chellali
Cast-Shadow Removal for Cooperative Adaptive Appearance Manipulation
Shoko Uesaka and Toshiyuki Amano
Evaluating Techniques to Share Hand Gestures for Remote Collaboration using Top-Down Projection in a Virtual Environment
Theophilus Teo, Kuniharu Sakurada, Masaaki Fukuoka, and Maki Sugimoto
A Data Collection Protocol, Tool and Analysis for the Mapping of Speech Volume to Avatar Facial Animation
Ryosuke Miyawaki, Monica Perusquia-Hernandez, Naoya Isoyama, Hideaki Uchiyama, and Kiyoshi Kiyokawa
AR Object Layout Method Using Miniature Room Generated from Depth Data
Keiichi Ihara and Ikkaku Kawaguchi
Haptics and Remote
Progressive Tearing and Cutting of Soft-bodies in High-performance Virtual Reality
Manos Kamarianakis, Antonis Protopsaltis, Dimitris Angelis, Michail Tamiolakis, and George Papagiannakis
An Integrated Ducted Fan-Based Multi-Directional Force Feedback with a Head Mounted Display
Koki Watanabe, Fumihiko Nakamura, Kuniharu Sakurada, Theophilus Teo, and Maki Sugimoto
ProGenVR: Natural Interactions for Procedural Content Generation in VR
Bruno Carvalho, Daniel Mendes, António Coelho, and Rui Rodrigues
FoReCast: Real-time Foveated Rendering and Unicasting for Immersive Remote Telepresence
Yonas T. Tefera, Dario Mazzanti, Sara Anastasi, Darwin G. Caldwell, Paolo Fiorini, and Nikhil Deshpande
Cognition
Towards Improving Educational Virtual Reality by Classifying Distraction using Deep Learning
Adil Khokhar and Christoph W. Borst
Gaze Guidance in the Real-world by Changing Color Saturation of Objects
Junpei Miyamoto, Hideki Koike, and Toshiyuki Amano
Manipulating the Sense of Embodiment in Virtual Reality: a Study of the Interactions Between the Senses of Agency, Self-location and Ownership
Martin Guy, Camille Jeunet-Kelway, Guillaume Moreau, and Jean-Marie Normand
Could you Relax in an Artistic Co-creative Virtual Reality Experience?
Julien Lomet, Ronan Gaugne, and Valérie Gouranton
Exploring EEG-Annotated Affective Animations in Virtual Reality: Suggestions for Improvement
Claudia Krogmeier and Christos Mousas
Displays/Rendering
A Rendering Method of Microdisplay Image to Expand Pupil Movable Region without Artifacts for Lenslet Array Near-Eye Displays
Bi Ye, Yuichiro Fujimoto, Taishi Sawabe, Masayuki Kanbara, Geert Lugtenberg, and Hirokazu Kato
Characteristics of Background Color Shifts Caused by Optical See-Through Head-Mounted Displays
Daichi Hirobe, Yuki Uranishi, Jason Orlosky, Shizuka Shirai, Photchara Ratsamee, and Haruo Takemura
OmniTiles - A User-Customizable Display Using An Omni-Directional Camera Projector System
Jana Hoffard, Shio Miyafuji, Jefferson Pardomuan, Toshiki Sato, and Hideki Koike

BibTeX (ICAT-EGVE2022)
@inproceedings{10.2312:egve.20222021,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Jean-Marie Normand and Hideaki Uchiyama},
  title     = {{ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments}},
  author    = {Jean-Marie Normand and Hideaki Uchiyama},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20222021}
}
@inproceedings{10.2312:egve.20221270,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{Comparing Modalities to Communicate Movement Amplitude During Tool Manipulation in a Shared Learning Virtual Environment}},
  author    = {Simon, Cassandre and Otmane, Samir and Chellali, Amine},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221270}
}
@inproceedings{10.2312:egve.20221273,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{A Data Collection Protocol, Tool and Analysis for the Mapping of Speech Volume to Avatar Facial Animation}},
  author    = {Miyawaki, Ryosuke and Perusquia-Hernandez, Monica and Isoyama, Naoya and Uchiyama, Hideaki and Kiyokawa, Kiyoshi},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221273}
}
@inproceedings{10.2312:egve.20221272,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{Evaluating Techniques to Share Hand Gestures for Remote Collaboration using Top-Down Projection in a Virtual Environment}},
  author    = {Teo, Theophilus and Sakurada, Kuniharu and Fukuoka, Masaaki and Sugimoto, Maki},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221272}
}
@inproceedings{10.2312:egve.20221271,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{Cast-Shadow Removal for Cooperative Adaptive Appearance Manipulation}},
  author    = {Uesaka, Shoko and Amano, Toshiyuki},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221271}
}
@inproceedings{10.2312:egve.20221274,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{AR Object Layout Method Using Miniature Room Generated from Depth Data}},
  author    = {Ihara, Keiichi and Kawaguchi, Ikkaku},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221274}
}
@inproceedings{10.2312:egve.20221275,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{Progressive Tearing and Cutting of Soft-bodies in High-performance Virtual Reality}},
  author    = {Kamarianakis, Manos and Protopsaltis, Antonis and Angelis, Dimitris and Tamiolakis, Michail and Papagiannakis, George},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221275}
}
@inproceedings{10.2312:egve.20221276,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{An Integrated Ducted Fan-Based Multi-Directional Force Feedback with a Head Mounted Display}},
  author    = {Watanabe, Koki and Nakamura, Fumihiko and Sakurada, Kuniharu and Teo, Theophilus and Sugimoto, Maki},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221276}
}
@inproceedings{10.2312:egve.20221277,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{ProGenVR: Natural Interactions for Procedural Content Generation in VR}},
  author    = {Carvalho, Bruno and Mendes, Daniel and Coelho, António and Rodrigues, Rui},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221277}
}
@inproceedings{10.2312:egve.20221279,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{Towards Improving Educational Virtual Reality by Classifying Distraction using Deep Learning}},
  author    = {Khokhar, Adil and Borst, Christoph W.},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221279}
}
@inproceedings{10.2312:egve.20221278,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{FoReCast: Real-time Foveated Rendering and Unicasting for Immersive Remote Telepresence}},
  author    = {Tefera, Yonas T. and Mazzanti, Dario and Anastasi, Sara and Caldwell, Darwin G. and Fiorini, Paolo and Deshpande, Nikhil},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221278}
}
@inproceedings{10.2312:egve.20221280,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{Gaze Guidance in the Real-world by Changing Color Saturation of Objects}},
  author    = {Miyamoto, Junpei and Koike, Hideki and Amano, Toshiyuki},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221280}
}
@inproceedings{10.2312:egve.20221282,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{Could you Relax in an Artistic Co-creative Virtual Reality Experience?}},
  author    = {Lomet, Julien and Gaugne, Ronan and Gouranton, Valérie},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221282}
}
@inproceedings{10.2312:egve.20221281,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{Manipulating the Sense of Embodiment in Virtual Reality: a Study of the Interactions Between the Senses of Agency, Self-location and Ownership}},
  author    = {Guy, Martin and Jeunet-Kelway, Camille and Moreau, Guillaume and Normand, Jean-Marie},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221281}
}
@inproceedings{10.2312:egve.20221283,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{Exploring EEG-Annotated Affective Animations in Virtual Reality: Suggestions for Improvement}},
  author    = {Krogmeier, Claudia and Mousas, Christos},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221283}
}
@inproceedings{10.2312:egve.20221285,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{Characteristics of Background Color Shifts Caused by Optical See-Through Head-Mounted Displays}},
  author    = {Hirobe, Daichi and Uranishi, Yuki and Orlosky, Jason and Shirai, Shizuka and Ratsamee, Photchara and Takemura, Haruo},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221285}
}
@inproceedings{10.2312:egve.20221284,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{A Rendering Method of Microdisplay Image to Expand Pupil Movable Region without Artifacts for Lenslet Array Near-Eye Displays}},
  author    = {Ye, Bi and Fujimoto, Yuichiro and Sawabe, Taishi and Kanbara, Masayuki and Lugtenberg, Geert and Kato, Hirokazu},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221284}
}
@inproceedings{10.2312:egve.20221286,
  booktitle = {ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor    = {Hideaki Uchiyama and Jean-Marie Normand},
  title     = {{OmniTiles - A User-Customizable Display Using An Omni-Directional Camera Projector System}},
  author    = {Hoffard, Jana and Miyafuji, Shio and Pardomuan, Jefferson and Sato, Toshiki and Koike, Hideki},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-03868-179-3},
  DOI       = {10.2312/egve.20221286}
}

Recent Submissions

  • Item
    ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments
    (The Eurographics Association, 2022) Jean-Marie Normand; Hideaki Uchiyama
  • Item
    Comparing Modalities to Communicate Movement Amplitude During Tool Manipulation in a Shared Learning Virtual Environment
    (The Eurographics Association, 2022) Simon, Cassandre; Otmane, Samir; Chellali, Amine; Hideaki Uchiyama; Jean-Marie Normand
    Shared immersive environments are used to teach technical skills and communicate relevant information. However, designing the appropriate interfaces and interactions to support this communication process remains an open issue. We explore using three modalities to communicate movement amplitude during tool manipulation tasks in a shared immersive environment. The haptic, visual, and verbal modalities were used separately to instruct a learner about the amplitude of the movements to perform in 3D space. The user study comparing these modalities shows that instructions given through the visual modality reduced the distance estimation error. In contrast, the haptic modality helped the learners perform the task significantly faster. The verbal modality significantly increased the perceived sense of copresence but was the least preferred modality. This research contributes to understanding the importance of each modality when communicating spatial skills in a shared immersive environment. The results suggest that combining modalities could be the most appropriate way to transfer movement amplitude information to a learner, improving both performance and user experience. These findings can enhance the design of immersive collaborative systems and open new perspectives for further research on the effectiveness of multimodal interaction in supporting the learning of technical skills in VR. The designed tools can be used in different fields, such as medical teaching applications.
  • Item
    A Data Collection Protocol, Tool and Analysis for the Mapping of Speech Volume to Avatar Facial Animation
    (The Eurographics Association, 2022) Miyawaki, Ryosuke; Perusquia-Hernandez, Monica; Isoyama, Naoya; Uchiyama, Hideaki; Kiyokawa, Kiyoshi; Hideaki Uchiyama; Jean-Marie Normand
    Knowing the relationship between speech-related facial movement and speech is important for avatar animation. Accurate facial displays are necessary to convey perceptual speech characteristics fully. Recently, efforts have been made to infer the relationship between facial movement and speech with data-driven methodologies using computer vision. To this aim, we propose to use blendshape-based facial movement tracking, because it can be easily translated to avatar movement. Furthermore, we present a protocol for audio-visual and behavioral data collection and a web-based tool that aids in collecting and synchronizing data. As a start, we provide a database of six Japanese participants reading emotion-related scripts at different volume levels. Using this methodology, we found a relationship between speech volume and facial movement around the nose, cheek, mouth, and head pitch. We hope that our protocol, web-based tool, and collected data will be useful for other scientists deriving models for avatar animation. (A toy illustration of a volume-to-blendshape mapping appears after this list.)
  • Item
    Evaluating Techniques to Share Hand Gestures for Remote Collaboration using Top-Down Projection in a Virtual Environment
    (The Eurographics Association, 2022) Teo, Theophilus; Sakurada, Kuniharu; Fukuoka, Masaaki; Sugimoto, Maki; Hideaki Uchiyama; Jean-Marie Normand
    Sharing hand gestures in remote collaboration offers natural and expressive communication between collaborators. Previously proposed techniques allow sharing dependent (attached to something) or independent (unattached) hand gestures in an immersive remote collaboration. However, research gaps remain concerning how different sharing techniques impact user behaviour and performance. In this paper, we present an evaluation study comparing the sharing of dependent and independent hand gestures. We developed a prototype supporting three techniques for sharing hand gestures: Attached to Local, Attached to Object, and Independent Hands. We also use top-down projection, an easy-to-set-up method for sharing a local user's environment with a remote user. We compared the three techniques and found that independent hands help a remote user guide a local user through an object interaction task more quickly than hands attached to the local user. They also give clearer instructions than dependent hands, despite the limited depth perception caused by top-down projection. A similar trend is found in the remote users' preferences.
  • Item
    Cast-Shadow Removal for Cooperative Adaptive Appearance Manipulation
    (The Eurographics Association, 2022) Uesaka, Shoko; Amano, Toshiyuki; Hideaki Uchiyama; Jean-Marie Normand
    We propose a framework to suppress over-illumination in the overlapping area created by cooperative adaptive appearance manipulation with two independently working projector-camera systems. The proposed method estimates projection overlap in each projector-camera unit without geometrical mapping or projection-image sharing. The method then suppresses the projection illumination in the overlapping area through compensated reflectance estimation. Experimental results using two projector-camera systems confirm that the proposed method adaptively detects overlapping areas and removes the cast shadows that cause uneven illumination during cooperative appearance manipulation of a moving 3D target.
  • Item
    AR Object Layout Method Using Miniature Room Generated from Depth Data
    (The Eurographics Association, 2022) Ihara, Keiichi; Kawaguchi, Ikkaku; Hideaki Uchiyama; Jean-Marie Normand
    In augmented reality (AR), users can place virtual objects anywhere in a real-world room; we call this task AR layout. Although several object manipulation techniques have been proposed for AR, they are difficult to use for AR layout because of the difficulty of freely changing the position and size of virtual objects. In this study, we make the World-in-Miniature (WIM) technique, originally proposed as a manipulation technique for virtual reality (VR), available in AR to support AR layout. Our system uses the AR device's depth sensors to acquire a mesh of the room and to create and update a miniature of the room in real time. Users can manipulate miniature objects to move virtual objects to arbitrary positions and scale them to arbitrary sizes. In addition, because the miniature object can be manipulated instead of the real-scale object, we assumed that our system would shorten placement time and reduce the user's workload. In a previous study, we created a prototype and investigated the properties of manipulating miniature objects in AR. In this study, we conducted an experiment to evaluate how our system can support AR layout. To make the task close to actual use, we used various objects and let the participants design an AR layout of their own will. The results showed that our system significantly reduced workload in physical and temporal demand, although there was no significant difference in total manipulation time. (A minimal sketch of the miniature-to-world mapping appears after this list.)
  • Item
    Progressive Tearing and Cutting of Soft-bodies in High-performance Virtual Reality
    (The Eurographics Association, 2022) Kamarianakis, Manos; Protopsaltis, Antonis; Angelis, Dimitris; Tamiolakis, Michail; Papagiannakis, George; Hideaki Uchiyama; Jean-Marie Normand
    We present an algorithm that allows a user within a virtual environment to perform real-time unconstrained cuts or consecutive tears, i.e., progressive, continuous fractures, on a deformable rigged and soft-body mesh model within a high-performance budget of 10 ms. To recreate realistic results for different physically principled materials such as sponges and hard or soft tissues, we incorporate a novel soft-body deformation via a particle system layered on top of a linear-blend skinning model. Our framework allows the simulation of realistic, surgical-grade cuts and continuous tears, which is especially valuable in the context of medical VR training. To achieve high performance in VR, our algorithms are based on Euclidean geometric predicates on the rigged mesh and require no model-specific pre-processing. The contribution of this work lies in the fact that current frameworks supporting similar kinds of model tearing either do not operate in real time at high performance or apply only to predefined tears. The presented framework allows the user to freely cut or tear a 3D mesh model consecutively, in under 10 ms, while preserving its soft-body behaviour and/or allowing further animation.
  • Item
    An Integrated Ducted Fan-Based Multi-Directional Force Feedback with a Head Mounted Display
    (The Eurographics Association, 2022) Watanabe, Koki; Nakamura, Fumihiko; Sakurada, Kuniharu; Teo, Theophilus; Sugimoto, Maki; Hideaki Uchiyama; Jean-Marie Normand
    Adding force feedback to virtual reality applications enhances the immersive experience. We propose a prototype featuring head-based multi-directional force feedback in a virtual environment, designed by integrating four ducted fans into a head-mounted display. Our technical evaluation revealed the force characteristics of the ducted fans, including presentable power, sound level, and latency. In the first part of our study, we investigated the minimum force that a user can perceive in different directions (forward/backward force; up/down/left/right rotational force), which yielded the absolute detection threshold for each directional force. Following that, we evaluated the impact of the force feedback in an immersive flight simulation in the second part of our study. The results indicate that our technique significantly improved user enjoyment, comfort, and visual-and-tactile perception, and reduced simulator sickness in the immersive flight simulation.
  • Item
    ProGenVR: Natural Interactions for Procedural Content Generation in VR
    (The Eurographics Association, 2022) Carvalho, Bruno; Mendes, Daniel; Coelho, António; Rodrigues, Rui; Hideaki Uchiyama; Jean-Marie Normand
    3D content creation for virtual worlds is a difficult task, typically requiring specialized tools based on a WIMP interface for modelling, composition, and animation. However, these interfaces pose several limitations, namely regarding the 2D-3D mapping required for both input and output. To overcome such limitations, VR modelling approaches have been proposed. However, translating the tools relevant for creating large 3D scenes to VR settings is not trivial. Procedural content generation (PCG) is one such tool, allowing content to be generated automatically following a set of parameterized rules. In this work, we propose a novel approach for immersive 3D modelling based on a set of procedural rules for content generation and natural interactions, bridging the gap between immersive content creation and PCG. We developed a prototype implementing our approach and conducted a user evaluation to assess its applicability. Results suggest that the time and mental effort associated with defining the rules can be compensated by the time and physical effort saved when creating complex scenes.
  • Item
    Towards Improving Educational Virtual Reality by Classifying Distraction using Deep Learning
    (The Eurographics Association, 2022) Khokhar, Adil; Borst, Christoph W.; Hideaki Uchiyama; Jean-Marie Normand
    Distractions can cause students to miss out on critical information in educational Virtual Reality (VR) environments. Our work uses generalized features (angular velocities, positional velocities, pupil diameter, and eye openness) extracted from VR headset sensor data (head-tracking, hand-tracking, and eye-tracking) to train a deep CNN-LSTM classifier to detect distractors in our educational VR environment. We present preliminary results demonstrating 94.93% accuracy for our classifier, an improvement over two recent approaches in both accuracy and the generality of the features used. We believe that our work can improve educational VR by providing a more accurate and generalizable approach to distractor detection. (An illustrative CNN-LSTM sketch appears after this list.)
  • Item
    FoReCast: Real-time Foveated Rendering and Unicasting for Immersive Remote Telepresence
    (The Eurographics Association, 2022) Tefera, Yonas T.; Mazzanti, Dario; Anastasi, Sara; Caldwell, Darwin G.; Fiorini, Paolo; Deshpande, Nikhil; Hideaki Uchiyama; Jean-Marie Normand
    Rapidly growing modern virtual reality (VR) interfaces are increasingly used as visualization and interaction media in 3D telepresence systems. Remote environments scanned using RGB-D cameras and represented as dense point clouds are visualized in VR in real time to increase the user's immersion. To this end, such interfaces require high-quality, low-latency, and high-throughput transmission. In other words, the entire system pipeline, from data acquisition to visualization in VR, has to be optimized for high performance. Point-cloud data particularly suffers from network latency and throughput limitations that negatively impact the user experience in telepresence. The human visual system provides an insight into approaching these challenges: human eyes have their sharpest visual acuity at the center of their field of view, and acuity falls off towards the periphery. This visual acuity fall-off inspired the design of a novel immersive 3D data visualization framework that facilitates the processing, transmission, and visualization of dense point clouds in VR. The proposed FoReCast framework shows significant reductions in latency and throughput, higher than 60% in both. A preliminary user study shows that the framework does not significantly affect the user quality of experience in immersive remote telepresence. (A toy sketch of acuity-based point thinning appears after this list.)
  • Item
    Gaze Guidance in the Real-world by Changing Color Saturation of Objects
    (The Eurographics Association, 2022) Miyamoto, Junpei; Koike, Hideki; Amano, Toshiyuki; Hideaki Uchiyama; Jean-Marie Normand
    In this study, we propose a method for real-world gaze guidance that projects an image onto a real-world object and changes its appearance based on visual saliency. In the proposed method, an image of the object is first acquired. Next, the image is modified so that the visual prominence of the target object is increased and that of the other parts is decreased. Finally, the modified image is re-projected onto the object itself. Consequently, the object's appearance and visual prominence are altered, and the user's gaze is drawn to the desired object. We propose an image processing method that changes the saturation of an object, which we call the "saturation filter." A coaxial projector-camera system, which does not need to be recalibrated when an object moves, was used to apply the proposed gaze guidance method to a 3D object. Two experiments were conducted to verify the effectiveness of the proposed method in guiding a viewer's gaze, and the results confirmed that the proposed method achieves the intended gaze guidance effect. (A minimal saturation-filter sketch appears after this list.)
  • Item
    Could you Relax in an Artistic Co-creative Virtual Reality Experience?
    (The Eurographics Association, 2022) Lomet, Julien; Gaugne, Ronan; Gouranton, Valérie; Hideaki Uchiyama; Jean-Marie Normand
    Our work contributes to the design and study of artistic collaborative virtual environments through the presentation of an immersive and interactive digital artwork installation and the evaluation of the experience's impact on visitors' emotional state. The experience is centered on a dance performance, involves collaborative spectators who are engaged in the experience through full-body movements, and is structured in three phases: a phase of relaxation and discovery of the universe, a phase of co-creation, and a phase of co-active contemplation. The collaborative artwork ''Creative Harmony'' was designed within a multidisciplinary team of artists, researchers, and computer scientists from different laboratories. The aesthetic of the artistic environment is inspired by 19th-century German Romantic painting. To foster co-presence, each participant in the experience is associated with an avatar that represents both their body and their movements. The music is an original composition designed to give a peaceful and meditative ambiance to the universe of ''Creative Harmony''. The evaluation of the impact on visitors' mood is based on the "Brief Mood Introspection Scale" (BMIS), a standard tool widely used in psychological and medical contexts. We also present an assessment of the experience through the analysis of questionnaires filled in by the visitors. We observed a positive increase in the Positive-Tired indicator and a decrease in the Negative-Relaxed indicator, demonstrating the relaxing capabilities of the immersive virtual environment.
  • Item
    Manipulating the Sense of Embodiment in Virtual Reality: a Study of the Interactions Between the Senses of Agency, Self-location and Ownership
    (The Eurographics Association, 2022) Guy, Martin; Jeunet-Kelway, Camille; Moreau, Guillaume; Normand, Jean-Marie; Hideaki Uchiyama; Jean-Marie Normand
    In Virtual Reality (VR), the Sense of Embodiment (SoE) corresponds to the feeling of controlling and owning a virtual body, usually referred to as an avatar. The SoE is generally divided into three components: the Sense of Agency (SoA), which characterises the user's level of control over the avatar; the Sense of Self-Location (SoSL), which is the feeling of being located in the avatar; and the Sense of Body-Ownership (SoBO), which represents the attribution of the virtual body to the user. While previous studies showed that the SoE can be manipulated by disturbing either the SoA, the SoBO, or the SoSL, the relationships and interactions between these three components remain unclear. In this paper, we aim to extend the understanding of the SoE and the interactions between its components by 1) experimentally manipulating them in VR via biased visual feedback, and 2) determining whether each sub-component can be selectively altered. To do so, we designed a within-subject experiment in which 47 right-handed participants performed movements of their right hand under different experimental conditions impacting the sub-components of embodiment: the SoA was modified by impacting control of the avatar through biased visual feedback, the SoBO was altered by modifying the realism of the virtual right hand (anthropomorphic cartoon hand or non-anthropomorphic stick ''fingers''), and the SoSL was controlled via the user's point of view (first or third person). After each trial, participants rated their level of agency, ownership, and self-location on a 7-item Likert scale. Analysis of the results revealed that the three components could not be selectively altered in this experiment. Nevertheless, these preliminary results pave the way for further studies.
  • Item
    Exploring EEG-Annotated Affective Animations in Virtual Reality: Suggestions for Improvement
    (The Eurographics Association, 2022) Krogmeier, Claudia; Mousas, Christos; Hideaki Uchiyama; Jean-Marie Normand
    In this work, we recorded brain activity data from participants who viewed 12 affective character animations in virtual reality. Frontal alpha asymmetry (FAA) scores were calculated from electroencephalography (EEG) data to understand objective affective responses to these animations. A subset of these animations was then annotated as either low FAA (eliciting lower FAA responses) or high FAA (eliciting higher FAA responses). Next, these annotated animations were used in a primary 2×2 study in which we a) examined whether we could replicate FAA responses to low-FAA and high-FAA animations in a subsequent study, and b) investigated how the number of characters in the VR environment influences FAA responses. Additionally, we compared FAA to self-reported affective responses to the four conditions (one character, low FAA; one character, high FAA; four characters, low FAA; four characters, high FAA). In this way, our research seeks to better understand objective and subjective emotional responses in VR. Results suggest that annotated FAA may not inform FAA responses to affective animations in a subsequent study when more characters are present. However, self-reported affective responses to the four conditions are in line with the FAA-annotated responses. We offer suggestions for the development of specific affective experiences in VR based on preliminary brain activity data. (A minimal sketch of a common FAA computation appears after this list.)
  • Item
    Characteristics of Background Color Shifts Caused by Optical See-Through Head-Mounted Displays
    (The Eurographics Association, 2022) Hirobe, Daichi; Uranishi, Yuki; Orlosky, Jason; Shirai, Shizuka; Ratsamee, Photchara; Takemura, Haruo; Hideaki Uchiyama; Jean-Marie Normand
    Optical see-through head-mounted displays (OST-HMDs) are increasingly used in many applications as Augmented Reality (AR) support devices. However, problems remain that prevent their use as general-purpose devices. One of these is the color blending problem, in which light from the background overlaps with light from the OST-HMD and shifts the display's light away from its intended intensity and color. Though color compensation methods exist, properly compensating for these shifts requires knowing how the background color affects the light that eventually reaches the user's eye when combined with the OST-HMD image. In this paper, we study how background colors shift as a result of passing through the OST-HMD's optics in order to better inform the development of color compensation methods. We measured the background color objectively for three off-the-shelf OST-HMDs and evaluated the results. We found that all three OST-HMDs shift the background color to a perceptible degree and that the degree of shift depends on the original background color. We also investigated how the degree of shift differs between areas on the OST-HMD screen and between measuring angles. The results showed that, for some OST-HMDs, the background color shift depends on both the measured area and angle.
  • Item
    A Rendering Method of Microdisplay Image to Expand Pupil Movable Region without Artifacts for Lenslet Array Near-Eye Displays
    (The Eurographics Association, 2022) Ye, Bi; Fujimoto, Yuichiro; Sawabe, Taishi; Kanbara, Masayuki; Lugtenberg, Geert; Kato, Hirokazu; Hideaki Uchiyama; Jean-Marie Normand
    Near-eye displays (NEDs) with a lenslet array (LA) generate a virtual image in the observer's field of view (FOV). Although this technology is useful for designing lightweight NEDs, undesirable artifacts (i.e., cross-talk) occur when the user's pupil becomes larger than the pupil practical movable region (PPMR) or moves out of it. We propose a rendering method for microdisplay images that takes pupil size into account and incorporates a pupil margin into the ray-tracing process. Ray-light columns emitted by one microdisplay pixel (MP) enter the pupil and pupil-margin area after passing through a number of lenses, and each lens maps the MP to one virtual pixel (VP) on the virtual image plane. The weight of each VP is the intersection area between its ray-light column and the pupil plus pupil margin, divided by the sum of the intersection areas of all the ray-light columns generated by the MP with the pupil plus pupil margin; the value of each MP is then determined by its VPs and the associated weights. Through retinal image simulations, we confirmed that the proposed rendering approach substantially enlarges the PPMR to accommodate large pupil diameters and wide transition distances while reducing eye relief to an optimal (sunglasses) distance. (The weighting scheme is restated as an equation after this list.)
  • Item
    OmniTiles - A User-Customizable Display Using An Omni-Directional Camera Projector System
    (The Eurographics Association, 2022) Hoffard, Jana; Miyafuji, Shio; Pardomuan, Jefferson; Sato, Toshiki; Koike, Hideki; Hideaki Uchiyama; Jean-Marie Normand
    We present OmniTiles, a manually reconfigurable interface that enables users to customize their own display. This is achieved with tiles in basic shapes that are clipped together via magnets. The created structures are then placed on top of a camera-projector setup that tracks the individual tiles and projects onto them. Generating different structures requires no activation mechanism or prior technical knowledge from the user. The 3D-printed tiles are robust and cost-efficient, making the system particularly suited for non-experts such as families with children. We first explain the creation process of our tiles and the implementation of the system. We then demonstrate the flexibility of our system via applications unique to our tile approach and discuss the limitations and future plans for the system.
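
For "A Data Collection Protocol, Tool and Analysis for the Mapping of Speech Volume to Avatar Facial Animation": the paper derives the volume-to-movement relationship from collected data, so the following is only a toy illustration of the general idea. The linear map, its thresholds, and the "jawOpen" blendshape name are assumptions for illustration, not the authors' model.

import numpy as np

def rms_volume(frame):
    """Root-mean-square amplitude of one audio frame."""
    return float(np.sqrt(np.mean(np.square(frame))))

def volume_to_blendshape(volume, v_min=0.01, v_max=0.3):
    """Map RMS volume linearly onto a blendshape weight in [0, 1].
    The range and the target blendshape are illustrative placeholders."""
    w = (volume - v_min) / (v_max - v_min)
    return {"jawOpen": float(np.clip(w, 0.0, 1.0))}

# A louder frame opens the avatar's jaw further.
audio_frame = 0.2 * np.sin(np.linspace(0.0, 2.0 * np.pi * 440, 1024))
print(volume_to_blendshape(rms_volume(audio_frame)))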
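
For "AR Object Layout Method Using Miniature Room Generated from Depth Data": a minimal sketch of the core World-in-Miniature idea, in which a manipulation of a miniature proxy is replayed at real scale. The uniform scale factor and the origin handling are assumptions for illustration, not details taken from the paper.

import numpy as np

def miniature_to_world(p_mini, mini_origin, world_origin, scale=0.05):
    """Invert the room-to-miniature scaling for a proxy position p_mini."""
    return world_origin + (np.asarray(p_mini) - mini_origin) / scale

# With a 1:20 miniature (scale = 0.05), moving a proxy by 10 cm
# moves the corresponding real-scale object by 2 m.
print(miniature_to_world([0.1, 0.0, 0.0], np.zeros(3), np.zeros(3)))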
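
For "Towards Improving Educational Virtual Reality by Classifying Distraction using Deep Learning": the abstract names the input features but not the architecture, so this PyTorch sketch only illustrates the CNN-LSTM pattern over windowed headset features; all layer sizes and the window length are assumptions.

import torch
import torch.nn as nn

class DistractionClassifier(nn.Module):
    """CNN-LSTM over windows of (angular velocity, positional velocity,
    pupil diameter, eye openness); sizes are illustrative assumptions."""
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        # 1-D convolutions over time extract short-range motion patterns.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        # The LSTM aggregates the convolved sequence over the whole window.
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # distracted vs. attentive

    def forward(self, x):            # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, (h, _) = self.lstm(z)     # final hidden state summarizes the window
        return self.head(h[-1])

logits = DistractionClassifier()(torch.randn(8, 120, 4))  # 8 windows, 120 steps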
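
For "FoReCast: Real-time Foveated Rendering and Unicasting for Immersive Remote Telepresence": a toy sketch of acuity-based point-cloud thinning that keeps all points near the gaze direction and subsamples more aggressively with eccentricity. The three zones and keep-probabilities are assumptions; FoReCast's actual pipeline is not reproduced here.

import numpy as np

def foveate(points, gaze_dir, eye_pos, rng=None):
    """points: (N, 3); gaze_dir: unit gaze vector; eye_pos: (3,) eye position."""
    if rng is None:
        rng = np.random.default_rng()
    v = points - eye_pos
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    # Angular eccentricity of each point from the gaze ray, in degrees.
    ecc = np.degrees(np.arccos(np.clip(v @ gaze_dir, -1.0, 1.0)))
    keep_prob = np.where(ecc < 10, 1.0, np.where(ecc < 30, 0.4, 0.1))
    return points[rng.random(len(points)) < keep_prob]

pts = np.random.default_rng(1).uniform(-1.0, 1.0, size=(100_000, 3))
print(len(foveate(pts, np.array([0.0, 0.0, 1.0]), np.zeros(3))))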
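
For "Gaze Guidance in the Real-world by Changing Color Saturation of Objects": a minimal sketch of a saturation filter that raises the saturation of a masked target and lowers it elsewhere. The gain values are assumptions, and the paper's projector-side compensation for the physical surface is omitted.

import cv2
import numpy as np

def saturation_filter(bgr, target_mask, gain_up=1.5, gain_down=0.5):
    """Boost saturation inside target_mask, suppress it outside."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    s = hsv[..., 1]
    s[target_mask > 0] *= gain_up      # make the target more salient
    s[target_mask == 0] *= gain_down   # de-emphasize the surroundings
    hsv[..., 1] = np.clip(s, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

frame = cv2.imread("scene.png")                             # hypothetical input
mask = cv2.imread("target_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical mask
guided = saturation_filter(frame, mask)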
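
For "Exploring EEG-Annotated Affective Animations in Virtual Reality": the abstract does not spell out its FAA formula, so this sketch uses the common definition ln(right alpha power) − ln(left alpha power) over a frontal electrode pair such as F4/F3; the electrode pair and band limits are conventional choices, not taken from the paper.

import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density of `signal` within the alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs) * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def faa_score(f3, f4, fs):
    """FAA = ln(alpha power at F4) - ln(alpha power at F3); under the usual
    inverse alpha-activity assumption, positive values indicate relatively
    greater left-hemisphere activation."""
    return np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))

# Example with synthetic one-minute recordings sampled at 256 Hz.
fs, rng = 256, np.random.default_rng(0)
f3, f4 = rng.standard_normal(fs * 60), rng.standard_normal(fs * 60)
print(faa_score(f3, f4, fs))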
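
For "A Rendering Method of Microdisplay Image to Expand Pupil Movable Region without Artifacts for Lenslet Array Near-Eye Displays": the weighting described verbally in the abstract can plausibly be read as the following equations, where A_{ij} is the intersection area between the ray-light column from microdisplay pixel i through lens j and the pupil-plus-margin region, and V_{ij} is the virtual-image value at the corresponding virtual pixel. This is a reconstruction of the verbal description, not the authors' notation:

    w_{ij} = \frac{A_{ij}}{\sum_{k} A_{ik}}, \qquad
    I_i = \sum_{j} w_{ij} \, V_{ij}

Here I_i is the value assigned to microdisplay pixel i, and the weights w_{ij} sum to one over the lenses whose ray-light columns intersect the pupil-plus-margin region.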