Single Image Surface Appearance Modeling with Self-augmented CNNs and Inexact Supervision

dc.contributor.author: Ye, Wenjie
dc.contributor.author: Li, Xiao
dc.contributor.author: Dong, Yue
dc.contributor.author: Peers, Pieter
dc.contributor.author: Tong, Xin
dc.contributor.editor: Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
dc.date.accessioned: 2018-10-07T14:59:16Z
dc.date.available: 2018-10-07T14:59:16Z
dc.date.issued: 2018
dc.description.abstract: This paper presents a deep learning based method for estimating spatially varying surface reflectance properties from a single image of a planar surface under unknown natural lighting. The method is trained using only photographs of exemplar materials, without reference to any artist-generated or densely measured spatially varying surface reflectance training data. Our approach builds on an empirical study of Li et al.'s [LDPT17] self-augmentation training strategy, which shows that the main role of the initial approximative network is to provide guidance on the inherent ambiguities in single image appearance estimation. Furthermore, our study indicates that this initial network can be inexact (i.e., trained from other data sources) as long as it resolves these inherent ambiguities. We show that the resulting single image estimation network, trained without manually labeled data, outperforms prior work in both accuracy and generality.
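
A minimal, hypothetical PyTorch sketch of the self-augmentation loop the abstract refers to (Li et al. [LDPT17]): the current (possibly inexact) network labels real photographs, the predicted reflectance is re-rendered under new random lighting to produce synthetic images whose ground truth is known exactly, and the network is then trained on those self-generated pairs. The network architecture, the render() forward model, and the data shapes below are illustrative placeholders, not the authors' actual implementation.

    # Illustrative self-augmentation training step (assumed shapes and toy forward model).
    import torch
    import torch.nn as nn

    def render(reflectance, lighting):
        # Placeholder forward model: any fixed rendering operator that maps
        # reflectance parameters + lighting to an image stands in here.
        return reflectance * lighting

    class ReflectanceNet(nn.Module):
        # Stand-in estimator: maps an image to per-pixel reflectance parameters.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, 3, padding=1))
        def forward(self, image):
            return self.net(image)

    def self_augment_step(model, optimizer, unlabeled_photos):
        # 1) Let the current network label unlabeled photographs.
        with torch.no_grad():
            pseudo_reflectance = model(unlabeled_photos)
        # 2) Re-render the predictions under new random lighting, yielding
        #    synthetic images with exactly known reflectance "labels".
        lighting = torch.rand_like(pseudo_reflectance)
        synthetic_images = render(pseudo_reflectance, lighting)
        # 3) Train the network on the self-generated pairs.
        optimizer.zero_grad()
        loss = nn.functional.l1_loss(model(synthetic_images), pseudo_reflectance)
        loss.backward()
        optimizer.step()
        return loss.item()

    if __name__ == "__main__":
        model = ReflectanceNet()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        photos = torch.rand(4, 3, 64, 64)  # stand-in for real material photographs
        for step in range(10):
            self_augment_step(model, optimizer, photos)
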
dc.description.number: 7
dc.description.sectionheaders: Appearance and Illumination
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 37
dc.identifier.doi: 10.1111/cgf.13560
dc.identifier.issn: 1467-8659
dc.identifier.pages: 201-211
dc.identifier.uri: https://doi.org/10.1111/cgf.13560
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf13560
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: Computing methodologies
dc.subject: Reflectance modeling
dc.title: Single Image Surface Appearance Modeling with Self-augmented CNNs and Inexact Supervision