Synthesizing Novel Views from Unregistered 2-D Images

dc.contributor.author: Havaldar, Parag
dc.contributor.author: Lee, Mi-Suen
dc.contributor.author: Medioni, Gerard
dc.date.accessioned: 2015-01-27T11:53:44Z
dc.date.available: 2015-01-27T11:53:44Z
dc.date.issued: 1997
dc.description.abstract: Synthesizing the image of a 3-D scene as it would be captured by a camera from an arbitrary viewpoint is a central problem in Computer Graphics. Given a complete 3-D model, it is possible to render the scene from any viewpoint, but the construction of such models is a tedious task. Here, we propose to bypass the model construction phase altogether and to generate images of a 3-D scene from any novel viewpoint using prestored images. Unlike methods presented so far, we propose to completely avoid inferring and reasoning in 3-D by using projective invariants. These invariants are derived from corresponding points in the prestored images. The correspondences between features are established off-line in a semi-automated way. It is then possible to generate wireframe animation in real time on a standard computing platform. Well-understood texture mapping methods can be applied to the wireframes to realistically render new images from the prestored ones. The method proposed here should allow the integration of computer-generated and real imagery for applications such as walkthroughs in realistic virtual environments. We illustrate our approach on synthetic and real indoor and outdoor images.
dc.description.number: 1
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 16
dc.identifier.doi: 10.1111/1467-8659.117
dc.identifier.issn: 1467-8659
dc.identifier.pages: 65-73
dc.identifier.uri: https://doi.org/10.1111/1467-8659.117
dc.publisher: Blackwell Publishers Ltd and the Eurographics Association
dc.title: Synthesizing Novel Views from Unregistered 2-D Images
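
The abstract above describes generating novel views purely from 2-D point correspondences, without 3-D reconstruction. As a rough illustration of that idea only (not the projective-invariant formulation of the paper itself), the sketch below shows epipolar point transfer, a standard 2-D technique: given a correspondence between two prestored views and fundamental matrices relating each of them to the desired novel view, the transferred point is the intersection of the two epipolar lines. The function names and the assumption that F31 and F32 are known (e.g. estimated off-line from correspondences) are hypothetical choices for this sketch.

    # Illustrative sketch only: epipolar point transfer between views.
    # This is NOT necessarily the authors' invariant-based algorithm.
    import numpy as np

    def to_homogeneous(p):
        """Convert a 2-D point (x, y) to homogeneous coordinates."""
        return np.array([p[0], p[1], 1.0])

    def epipolar_transfer(x1, x2, F31, F32):
        """Transfer a correspondence (x1 in view 1, x2 in view 2) into view 3.

        Convention assumed here: F31 maps a point of view 1 to its epipolar
        line in view 3 (x3^T F31 x1 = 0), and similarly for F32.  The
        transferred point is the intersection of the two epipolar lines,
        computed as their cross product in homogeneous coordinates.
        (The transfer degenerates when the two lines coincide.)
        """
        l1 = F31 @ to_homogeneous(x1)   # epipolar line of x1 in view 3
        l2 = F32 @ to_homogeneous(x2)   # epipolar line of x2 in view 3
        x3 = np.cross(l1, l2)           # line intersection
        return x3[:2] / x3[2]           # back to inhomogeneous coordinates

Applied to every wireframe vertex matched between the prestored images, such a transfer yields vertex positions in the novel view; edges and texture maps can then be drawn between the transferred vertices, in the spirit of the wireframe-plus-texture-mapping pipeline the abstract outlines.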