Edit Propagation using Geometric Analogies
Date: 2014-09-19
Abstract
Modeling complex geometrical shapes, like city scenes or terrains with dense vegetation, is a
time-consuming task that cannot be automated trivially. Creating and editing many similar, but not identical, models requires specialized methods that understand what makes these objects similar, either to create new variations of these models from scratch or to propagate edit operations from one object to all similar objects. In this thesis, we present new
methods to significantly reduce the effort required to model complex scenes.
For 2D scenes containing deformable objects, such as fish or snakes, we present a method to
find partial matches between deformed shapes that can be used to transfer localized properties such
as texture between matching shapes. Shapes are considered similar if they are related by pointwise
correspondences and if neighboring points have correspondences with similar transformation
parameters. Unlike previous work, this approach allows us to successfully establish matches
between strongly deformed objects, even in the presence of occlusions and sparse or unevenly
distributed sets of matching features.
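To make the neighborhood-consistency criterion concrete, the following minimal sketch (not the algorithm from the thesis) filters hypothetical candidate correspondences: each candidate carries the parameters of a local similarity transform, and a candidate is kept only when enough spatially nearby candidates carry similar parameters. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def filter_consistent_matches(points_a, candidates,
                              neighbor_radius=0.2, param_tol=0.3, min_support=3):
    """Keep candidate correspondences whose neighbors agree on the local transform.

    Illustrative sketch only. Each candidate is a triple (i, j, params), where
    `params` holds, e.g., the rotation angle and log-scale of a similarity
    transform mapping a neighborhood of points_a[i] onto the second shape.
    A candidate is retained if at least `min_support` neighboring candidates
    carry similar transform parameters.
    """
    points_a = np.asarray(points_a, dtype=float)
    kept = []
    for a_idx, b_idx, params in candidates:
        support = 0
        for a2, b2, params2 in candidates:
            if a2 == a_idx:
                continue
            # Consider only candidates anchored near the current point ...
            if np.linalg.norm(points_a[a2] - points_a[a_idx]) > neighbor_radius:
                continue
            # ... and count those whose transform parameters are similar.
            if np.linalg.norm(np.asarray(params2) - np.asarray(params)) < param_tol:
                support += 1
        if support >= min_support:
            kept.append((a_idx, b_idx))
    return kept
```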
For scenes consisting of 2D shape arrangements, such as floor plans, we propose methods
to find similar locations in the arrangements, even though the arrangements themselves are
dissimilar. Edit operations, such as object placements, can be propagated between similar
locations. Our approach is based on simple geometric relationships between the location and the
shape arrangement, such as the distance of the location to a shape boundary or the direction to
the closest shape corner. Two locations are similar if they have many similar relations to their
surrounding shape arrangement. To the best of our knowledge, there is no method that explicitly
attempts to find similar locations in dissimilar shape arrangements. We demonstrate populating
large scenes such as floor plans with hundreds of objects like pieces of furniture, using relatively
few edit operations.
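As a rough illustration of this relation-based similarity, the hypothetical sketch below computes, for a query location, the distance and direction to the nearest corner of each shape in the arrangement, and counts how many such relations two locations share. The actual method uses a richer set of relations and a more careful matching; the names and tolerances here are assumptions.

```python
import numpy as np

def location_relations(loc, polygons):
    """Collect simple geometric relations between a 2D location and a shape
    arrangement: for each polygon, the distance to its nearest corner and the
    unit direction towards that corner. Illustrative only."""
    loc = np.asarray(loc, dtype=float)
    relations = []
    for poly in polygons:                      # poly: (N, 2) array of corner coordinates
        corners = np.asarray(poly, dtype=float)
        d = np.linalg.norm(corners - loc, axis=1)
        k = int(np.argmin(d))
        direction = (corners[k] - loc) / (d[k] + 1e-9)
        relations.append((d[k], direction))
    return relations

def location_similarity(rel_a, rel_b, dist_tol=0.5, angle_tol=0.5):
    """Count how many relations of location A have a similar counterpart at
    location B; more shared relations means more similar locations."""
    score = 0
    for da, dir_a in rel_a:
        for db, dir_b in rel_b:
            if abs(da - db) < dist_tol and float(np.dot(dir_a, dir_b)) > np.cos(angle_tol):
                score += 1
                break
    return score
```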
Additionally, we show that providing several examples of an edit operation helps narrow down the user's modeling intent and improves the quality of the edit propagation.
A probabilistic model is learned from the examples and used to suggest similar edit operations.
We also present extensions that allow this method to be applied to 3D scenes. Compared to
previous approaches that use entire scenes as examples, our method provides more user control
and has no need for large databases of example scenes or domain-specific knowledge. We
demonstrate generating 3D interior decoration and complex city scenes, including buildings with
detailed facades, using only a few edit operations.
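To illustrate how several example edits can narrow down the modeling intent, the sketch below fits a simple diagonal Gaussian to the relation features of the example placements and scores candidate locations by their log-likelihood. This is a hypothetical stand-in for the learned probabilistic model, not the implementation from the thesis.

```python
import numpy as np

def fit_edit_model(example_features):
    """Fit a diagonal Gaussian over the relation features of the example edits.
    Hypothetical stand-in for the learned probabilistic model."""
    X = np.asarray(example_features, dtype=float)   # shape: (num_examples, num_features)
    mean = X.mean(axis=0)
    var = X.var(axis=0) + 1e-6                      # regularize to avoid zero variance
    return mean, var

def score_candidates(model, candidate_features):
    """Return the log-likelihood of each candidate location under the model;
    high-scoring candidates are suggested as targets for edit propagation."""
    mean, var = model
    X = np.asarray(candidate_features, dtype=float)
    return -0.5 * np.sum((X - mean) ** 2 / var + np.log(2 * np.pi * var), axis=1)
```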