Sketch-to-Design: Context-Based Part Assembly

dc.contributor.author: Xie, Xiaohua (en_US)
dc.contributor.author: Xu, Kai (en_US)
dc.contributor.author: Mitra, Niloy J. (en_US)
dc.contributor.author: Cohen-Or, Daniel (en_US)
dc.contributor.author: Gong, Wenyong (en_US)
dc.contributor.author: Su, Qi (en_US)
dc.contributor.author: Chen, Baoquan (en_US)
dc.contributor.editor: Holly Rushmeier and Oliver Deussen (en_US)
dc.date.accessioned: 2015-02-28T16:16:28Z
dc.date.available: 2015-02-28T16:16:28Z
dc.date.issued: 2013 (en_US)
dc.description.abstract: Designing 3D objects from scratch is difficult, especially when the user intent is fuzzy and lacks a clear target form. We facilitate design by providing reference and inspiration from existing model contexts. We rethink model design as navigating through different possible combinations of part assemblies based on a large collection of pre-segmented 3D models. We propose an interactive sketch-to-design system, where the user sketches prominent features of parts to combine. The sketched strokes are analysed individually and, more importantly, in context with the other parts to generate relevant shape suggestions via a design gallery interface. As a modelling session progresses and more parts get selected, contextual cues become increasingly dominant, and the model quickly converges to a final form. As a key enabler, we use pre-learned part-based contextual information to allow the user to quickly explore different combinations of parts. Our experiments demonstrate the effectiveness of our approach for efficiently designing new variations from existing shape collections. (en_US)
dc.description.number: 8
dc.description.seriesinformation: Computer Graphics Forum (en_US)
dc.description.volume: 32
dc.identifier.doi: 10.1111/cgf.12200 (en_US)
dc.identifier.issn: 1467-8659 (en_US)
dc.identifier.uri: https://doi.org/10.1111/cgf.12200 (en_US)
dc.publisher: The Eurographics Association and Blackwell Publishing Ltd. (en_US)
dc.subject: sketch-based modelling (en_US)
dc.subject: assembly-based modelling (en_US)
dc.subject: context (en_US)
dc.subject: interface design (en_US)
dc.subject: I.3.5 [Computer Graphics]: Computational Geometry and Object Modelling—Constructive solid geometry (en_US)
dc.title: Sketch-to-Design: Context-Based Part Assembly (en_US)