37-Issue 1
Browsing 37-Issue 1 by Subject "cross‐modal"
Now showing 1 - 2 of 2
Item
Audiovisual Resource Allocation for Bimodal Virtual Environments
(© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018)
Doukakis, E.; Debattista, K.; Harvey, C.; Bashford‐Rogers, T.; Chalmers, A.; Chen, Min and Benes, Bedrich
Fidelity is of key importance if virtual environments are to be used as authentic representations of real environments. However, simulating the multitude of senses that comprise the human sensory system is computationally challenging. With limited computational resources, it is essential to distribute them carefully in order to deliver the best possible perceptual experience. This paper investigates this balance of resources across multiple scenarios in which combined audiovisual stimulation is delivered to the user. A subjective experiment was undertaken in which participants (N=35) allocated five fixed resource budgets across graphics and acoustic stimuli; within each budget, increasing the quality of one stimulus decreased the quality of the other. Findings demonstrate that participants allocate more resources to graphics; however, as the computational budget increases, an approximately balanced distribution between graphics and acoustics is preferred. Based on the results, an audiovisual quality prediction model is proposed and successfully validated against previously untested budgets and an untested scenario.

Item
Olfaction and Selective Rendering
(© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018)
Harvey, Carlo; Bashford‐Rogers, Thomas; Debattista, Kurt; Doukakis, Efstratios; Chalmers, Alan; Chen, Min and Benes, Bedrich
Accurate simulation of all the senses in virtual environments is a computationally expensive task. Visual saliency models have been used to improve computational performance for rendered content, but this is insufficient for multi‐modal environments. This paper considers cross‐modal perception and, in particular, whether and how olfaction affects visual attention. Two experiments are presented. In the first, eye‐tracking data are gathered from a number of participants to establish where and how they view virtual objects when a smell is introduced, compared with an odourless condition. Based on the results of this experiment, a new type of saliency map for use in a selective‐rendering pipeline is presented. A second experiment validates this approach and demonstrates that, for the same rendering budget, participants rank images as better quality when compared to a reference.
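As a rough illustration of the budget-split behaviour reported in the first item (graphics favoured at small budgets, an approximately even split as the budget grows), the sketch below interpolates a graphics share between those two regimes. The functional form, the budget range and the 70% low-budget bias are assumptions made purely for illustration; this is not the audiovisual quality prediction model proposed in the paper.

```python
# Hypothetical heuristic inspired by the reported finding: graphics is
# favoured at low budgets and the split approaches 50/50 at high budgets.
# The constants and the linear form are illustrative assumptions only.

def split_budget(total_budget: float,
                 low_budget: float = 1.0,
                 high_budget: float = 5.0,
                 low_graphics_share: float = 0.7) -> tuple[float, float]:
    """Split a fixed compute budget between graphics and acoustics.

    At or below `low_budget`, graphics receives `low_graphics_share` of the
    budget; the share decays linearly to 0.5 (an even split) at `high_budget`.
    """
    # Clamp the interpolation parameter to [0, 1].
    t = (total_budget - low_budget) / (high_budget - low_budget)
    t = max(0.0, min(1.0, t))
    # Interpolate the graphics share from the low-budget bias down to 0.5.
    graphics_share = low_graphics_share + t * (0.5 - low_graphics_share)
    graphics = graphics_share * total_budget
    acoustics = total_budget - graphics
    return graphics, acoustics


if __name__ == "__main__":
    for budget in (1.0, 2.0, 3.0, 4.0, 5.0):
        g, a = split_budget(budget)
        print(f"budget={budget:.1f}  graphics={g:.2f}  acoustics={a:.2f}")
```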
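The second item's selective-rendering idea could be pictured along the following lines: a visual saliency map is boosted on image regions associated with the presented odour, and the modulated map then drives a per-pixel sample budget. The boost factor, the renormalisation and the sample-count mapping below are illustrative assumptions, not the saliency formulation or rendering pipeline used in the paper.

```python
import numpy as np

# Hypothetical sketch of a smell-modulated saliency map driving a per-pixel
# sample budget for a selective renderer. All parameter values are
# illustrative assumptions.

def odour_modulated_saliency(visual_saliency: np.ndarray,
                             odour_object_mask: np.ndarray,
                             odour_boost: float = 2.0) -> np.ndarray:
    """Boost saliency on pixels covered by objects associated with the
    presented smell, then renormalise the map to [0, 1]."""
    saliency = visual_saliency * (1.0 + odour_boost * odour_object_mask)
    return saliency / saliency.max()


def samples_per_pixel(saliency: np.ndarray,
                      min_spp: int = 1,
                      max_spp: int = 64) -> np.ndarray:
    """Map saliency to an integer sample count per pixel for the renderer."""
    return np.round(min_spp + saliency * (max_spp - min_spp)).astype(int)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vis = rng.random((4, 4))      # stand-in for a visual saliency map
    mask = np.zeros((4, 4))
    mask[1:3, 1:3] = 1.0          # pixels covered by the odour source
    spp = samples_per_pixel(odour_modulated_saliency(vis, mask))
    print(spp)
```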