Browsing by Author "Kim, Min H."
Now showing 1 - 6 of 6
Item: EUROGRAPHICS 2022: CGF 41-2 Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Chaine, Raphaëlle; Kim, Min H.

Item: Modeling Surround-aware Contrast Sensitivity (The Eurographics Association, 2021)
Yi, Shinyoung; Jeon, Daniel S.; Serrano, Ana; Jeong, Se-Yoon; Kim, Hui-Yong; Gutierrez, Diego; Kim, Min H.; Bousseau, Adrien and McGuire, Morgan
Despite advances in display technology, many existing applications rely on psychophysical datasets of human perception gathered using older, sometimes outdated displays. As a result, there exists the underlying assumption that such measurements can be carried over to the new viewing conditions of more modern technology. We have conducted a series of psychophysical experiments to explore contrast sensitivity using a state-of-the-art HDR display, taking into account not only the spatial frequency and luminance of the stimuli but also their surrounding luminance levels. From our data, we have derived a novel surround-aware contrast sensitivity function (CSF), which predicts human contrast sensitivity more accurately. We additionally provide a practical version that retains the benefits of our full model, while enabling easy backward compatibility and consistently producing good results across many existing applications that make use of CSF models. We show examples of effective HDR video compression using a transfer function derived from our CSF, tone-mapping, and improved accuracy in visual difference prediction.

Item: Modelling Surround‐aware Contrast Sensitivity for HDR Displays (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2022)
Yi, Shinyoung; Jeon, Daniel S.; Serrano, Ana; Jeong, Se‐Yoon; Kim, Hui‐Yong; Gutierrez, Diego; Kim, Min H.; Hauser, Helwig and Alliez, Pierre
Despite advances in display technology, many existing applications rely on psychophysical datasets of human perception gathered using older, sometimes outdated displays. As a result, there exists the underlying assumption that such measurements can be carried over to the new viewing conditions of more modern technology. We have conducted a series of psychophysical experiments to explore contrast sensitivity using a state‐of‐the‐art HDR display, taking into account not only the spatial frequency and luminance of the stimuli but also their surrounding luminance levels. From our data, we have derived a novel surround‐aware contrast sensitivity function (CSF), which predicts human contrast sensitivity more accurately. We additionally provide a practical version that retains the benefits of our full model, while enabling easy backward compatibility and consistently producing good results across many existing applications that make use of CSF models. We show examples of effective HDR video compression using a transfer function derived from our CSF, tone‐mapping, and improved accuracy in visual difference prediction.

Item: Pacific Graphics 2023 - CGF 42-7: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.

Item: Pacific Graphics 2023 - Short Papers and Posters: Frontmatter (The Eurographics Association, 2023)
Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.

Item: Progressive Acquisition of SVBRDF and Shape in Motion (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Ha, Hyunho; Baek, Seung‐Hwan; Nam, Giljoo; Kim, Min H.; Benes, Bedrich and Hauser, Helwig
To estimate appearance parameters, traditional SVBRDF acquisition methods require multiple input images to be captured with various angles of light and camera, followed by a post‐processing step. For this reason, subjects have been limited to static scenes, or a multiview system is required to capture dynamic objects. In this paper, we propose a simultaneous acquisition method of SVBRDF and shape allowing us to capture the material appearance of deformable objects in motion using a single RGBD camera. To do so, we progressively integrate photometric samples of surfaces in motion in a volumetric data structure with a deformation graph. Then, building upon recent advances of fusion‐based methods, we estimate SVBRDF parameters in motion. We make use of a conventional RGBD camera that consists of the colour and infrared cameras with active infrared illumination. The colour camera is used for capturing diffuse properties, and the infrared camera‐illumination module is employed for estimating specular properties by means of active illumination. Our joint optimization yields complete material appearance parameters. We demonstrate the effectiveness of our method with extensive evaluation on both synthetic and real data that include various deformable objects of specular and diffuse appearance.
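The progressive integration step described in the last abstract — accumulating photometric samples of moving surfaces into a volumetric data structure — can be sketched in miniature as a per-voxel running weighted average of diffuse-albedo samples. This is an illustrative assumption about the fusion scheme, not the authors' implementation; the class and parameter names here are hypothetical.

```python
from collections import defaultdict


class VoxelAlbedoFusion:
    """Minimal sketch of fusion-based appearance accumulation:
    each voxel keeps a weighted running average of the RGB albedo
    samples observed for it over time. Hypothetical illustration only."""

    def __init__(self):
        # voxel key -> (accumulated weighted RGB sums, accumulated weight)
        self.acc = defaultdict(lambda: ([0.0, 0.0, 0.0], 0.0))

    def integrate(self, voxel, rgb, weight=1.0):
        """Fold one photometric sample (an RGB albedo estimate) into a voxel.
        `weight` could encode, e.g., depth confidence or viewing angle."""
        sums, total = self.acc[voxel]
        new_sums = [s + weight * c for s, c in zip(sums, rgb)]
        self.acc[voxel] = (new_sums, total + weight)

    def albedo(self, voxel):
        """Return the current fused albedo estimate, or None if unobserved."""
        sums, total = self.acc[voxel]
        if total == 0.0:
            return None
        return tuple(s / total for s in sums)
```

As more frames arrive, each call to `integrate` refines the voxel's estimate, so the appearance model improves progressively rather than requiring all views up front — mirroring the single-camera, in-motion setting the abstract describes.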