Multi-Modal Perception for Selective Rendering
dc.contributor.author | Harvey, Carlo | en_US |
dc.contributor.author | Debattista, Kurt | en_US |
dc.contributor.author | Bashford-Rogers, Thomas | en_US |
dc.contributor.author | Chalmers, Alan | en_US |
dc.contributor.editor | Chen, Min and Zhang, Hao (Richard) | en_US |
dc.date.accessioned | 2017-03-13T18:13:02Z | |
dc.date.available | 2017-03-13T18:13:02Z | |
dc.date.issued | 2017 | |
dc.description.abstract | A major challenge in generating high‐fidelity virtual environments (VEs) is to be able to provide realism at interactive rates. The high‐fidelity simulation of light and sound is still unachievable in real time as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high‐fidelity rendering to improve performance by a series of novel exploitations: rendering parts of the scene that are not currently being attended to by the viewer at a much lower quality without the difference being perceived. This paper investigates the effect spatialized directional sound has on the visual attention of a user towards rendered images. These perceptual artefacts are utilized in selective rendering pipelines via the use of multi‐modal maps. The multi‐modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms, with a series of fixed cost rendering functions, and are found to perform significantly better than only using image saliency maps that are naively applied to multi‐modal VEs. | en_US |
dc.description.number | 1 | |
dc.description.sectionheaders | Articles | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 36 | |
dc.identifier.doi | 10.1111/cgf.12793 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.12793 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf12793 | |
dc.publisher | © 2017 The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | multi-modal | |
dc.subject | cross-modal | |
dc.subject | saliency | |
dc.subject | sound | |
dc.subject | graphics | |
dc.subject | selective rendering | |
dc.subject | I.3.3 [Computer Graphics]: Picture/Image Generation—Viewing Algorithms | |
dc.subject | I.4.8 [Image Processing and Computer Vision]: Scene Analysis—Object Recognition | |
dc.subject | I.4.8 [Image Processing and Computer Vision]: Scene Analysis—Tracking | |
dc.title | Multi-Modal Perception for Selective Rendering | en_US |