PG2022 Short Papers, Posters, and Work-in-Progress Papers

Pacific Graphics 2022 - Short Papers, Posters, and Work-in-Progress Papers
Kyoto, Japan | October 5 – 8, 2022

(for Full Papers (CGF) see PG 2022 - CGF 41-7)
Sketch and Modeling
Learning a Style Space for Interactive Line Drawing Synthesis from Animated 3D Models
Zeyu Wang, Tuanfeng Y. Wang, and Julie Dorsey
Multi-instance Referring Image Segmentation of Scene Sketches based on Global Reference Mechanism
Peng Ling, Haoran Mo, and Chengying Gao
Human Face Modeling based on Deep Learning through Line-drawing
Yuta Kawanaka, Syuhei Sato, Kaisei Sakurai, Shangce Gao, and Zheng Tang
An Interactive Modeling System of Japanese Castles with Decorative Objects
Shogo Umeyama and Yoshinori Dobashi
Interactive Deformable Image Registration with Dual Cursor
Takeo Igarashi, Tsukasa Koike, and Taichi Kin
Fast Geometric Computation
Intersection Distance Field Collision for GPU
Bastian Krayer, Rebekka Görge, and Stefan Müller
Reconstructing Bounding Volume Hierarchies from Memory Traces of Ray Tracers
Max von Buelow, Tobias Stensbeck, Volker Knauthe, Stefan Guthe, and Dieter W. Fellner
Rendering - Sampling
Improving View Independent Rendering for Multiview Effects
Ajinkya Gavane and Benjamin Watson
Image Enhancement
Adaptive and Dynamic Regularization for Rolling Guidance Image Filtering
Miku Fukatsu, Shin Yoshizawa, Hiroshi Takemura, and Hideo Yokota
Image Restoration
Shadow Removal via Cascade Large Mask Inpainting
Juwan Kim, Seung-Heon Kim, and Insung Jang
Perception and Visualization
Aesthetic Enhancement via Color Area and Location Awareness
Bailin Yang, Qingxu Wang, Frederick W. B. Li, Xiaohui Liang, Tianxiang Wei, and Changrui Zhu
DARC: A Visual Analytics System for Multivariate Applicant Data Aggregation, Reasoning and Comparison
Yihan Hou, Yu Liu, He Wang, Zhichao Zhang, Yue Li, Hai-Ning Liang, and Lingyun Yu
Digital Human
DFGA: Digital Human Faces Generation and Animation from the RGB Video using Modern Deep Learning Technology
Diqiong Jiang, Lihua You, Jian Chang, and Ruofeng Tong

BibTeX (PG2022 Short Papers, Posters, and Work-in-Progress Papers)
@inproceedings{10.2312:pg.20222019,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{Pacific Graphics 2022 - Short Papers, Posters, and Work-in-Progress Papers: Frontmatter}},
  author    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20222019}
}
@inproceedings{10.2312:pg.20221237,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{Learning a Style Space for Interactive Line Drawing Synthesis from Animated 3D Models}},
  author    = {Wang, Zeyu and Wang, Tuanfeng Y. and Dorsey, Julie},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221237}
}
@inproceedings{10.2312:pg.20221238,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{Multi-instance Referring Image Segmentation of Scene Sketches based on Global Reference Mechanism}},
  author    = {Ling, Peng and Mo, Haoran and Gao, Chengying},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221238}
}
@inproceedings{10.2312:pg.20221239,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{Human Face Modeling based on Deep Learning through Line-drawing}},
  author    = {Kawanaka, Yuta and Sato, Syuhei and Sakurai, Kaisei and Gao, Shangce and Tang, Zheng},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221239}
}
@inproceedings{10.2312:pg.20221240,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{An Interactive Modeling System of Japanese Castles with Decorative Objects}},
  author    = {Umeyama, Shogo and Dobashi, Yoshinori},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221240}
}
@inproceedings{10.2312:pg.20221241,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{Interactive Deformable Image Registration with Dual Cursor}},
  author    = {Igarashi, Takeo and Koike, Tsukasa and Kin, Taichi},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221241}
}
@inproceedings{10.2312:pg.20221242,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{Intersection Distance Field Collision for GPU}},
  author    = {Krayer, Bastian and Görge, Rebekka and Müller, Stefan},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221242}
}
@inproceedings{10.2312:pg.20221243,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{Reconstructing Bounding Volume Hierarchies from Memory Traces of Ray Tracers}},
  author    = {Buelow, Max von and Stensbeck, Tobias and Knauthe, Volker and Guthe, Stefan and Fellner, Dieter W.},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221243}
}
@inproceedings{10.2312:pg.20221244,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{Improving View Independent Rendering for Multiview Effects}},
  author    = {Gavane, Ajinkya and Watson, Benjamin},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221244}
}
@inproceedings{10.2312:pg.20221245,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{Adaptive and Dynamic Regularization for Rolling Guidance Image Filtering}},
  author    = {Fukatsu, Miku and Yoshizawa, Shin and Takemura, Hiroshi and Yokota, Hideo},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221245}
}
@inproceedings{10.2312:pg.20221246,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{Shadow Removal via Cascade Large Mask Inpainting}},
  author    = {Kim, Juwan and Kim, Seung-Heon and Jang, Insung},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221246}
}
@inproceedings{10.2312:pg.20221247,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{Aesthetic Enhancement via Color Area and Location Awareness}},
  author    = {Yang, Bailin and Wang, Qingxu and Li, Frederick W. B. and Liang, Xiaohui and Wei, Tianxiang and Zhu, Changrui},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221247}
}
@inproceedings{10.2312:pg.20221248,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{DARC: A Visual Analytics System for Multivariate Applicant Data Aggregation, Reasoning and Comparison}},
  author    = {Hou, Yihan and Liu, Yu and Wang, He and Zhang, Zhichao and Li, Yue and Liang, Hai-Ning and Yu, Lingyun},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221248}
}
@inproceedings{10.2312:pg.20221249,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor    = {Yang, Yin and Parakkat, Amal D. and Deng, Bailin and Noh, Seung-Tak},
  title     = {{DFGA: Digital Human Faces Generation and Animation from the RGB Video using Modern Deep Learning Technology}},
  author    = {Jiang, Diqiong and You, Lihua and Chang, Jian and Tong, Ruofeng},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-190-8},
  DOI       = {10.2312/pg.20221249}
}
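These entries can be cited directly from LaTeX; the file name `refs.bib` and the `plain` bibliography style below are illustrative assumptions, not part of the proceedings:

```latex
% Save the entries above as refs.bib, then in the document:
\documentclass{article}
\begin{document}
Line drawing synthesis from animated models~\cite{10.2312:pg.20221237}
builds on a learned style space.
\bibliographystyle{plain} % Eurographics also provides its own .bst styles
\bibliography{refs}
\end{document}
```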

Recent Submissions
  • Item
    Pacific Graphics 2022 - Short Papers, Posters, and Work-in-Progress Papers: Frontmatter
    (The Eurographics Association, 2022) Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
  • Item
    Learning a Style Space for Interactive Line Drawing Synthesis from Animated 3D Models
    (The Eurographics Association, 2022) Wang, Zeyu; Wang, Tuanfeng Y.; Dorsey, Julie; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    Most non-photorealistic rendering (NPR) methods for line drawing synthesis operate on a static shape. They are not tailored to process animated 3D models due to extensive per-frame parameter tuning needed to achieve the intended look and natural transition. This paper introduces a framework for interactive line drawing synthesis from animated 3D models based on a learned style space for drawing representation and interpolation. We refer to style as the relationship between stroke placement in a line drawing and its corresponding geometric properties. Starting from a given sequence of an animated 3D character, a user creates drawings for a set of keyframes. Our system embeds the raster drawings into a latent style space after they are disentangled from the underlying geometry. By traversing the latent space, our system enables a smooth transition between the input keyframes. The user may also edit, add, or remove the keyframes interactively, similar to a typical keyframe-based workflow. We implement our system with deep neural networks trained on synthetic line drawings produced by a combination of NPR methods. Our drawing-specific supervision and optimization-based embedding mechanism allow generalization from NPR line drawings to user-created drawings during run time. Experiments show that our approach generates high-quality line drawing animations while allowing interactive control of the drawing style across frames.
  • Item
    Multi-instance Referring Image Segmentation of Scene Sketches based on Global Reference Mechanism
    (The Eurographics Association, 2022) Ling, Peng; Mo, Haoran; Gao, Chengying; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    Scene sketch segmentation based on referring expressions plays an important role in sketch editing for the anime industry. While most existing referring image segmentation approaches are designed for the standard task of generating a binary segmentation mask for a single target or a group of targets, we think it necessary to equip these models with the ability of multi-instance segmentation. To this end, we propose GRM-Net, a one-stage framework tailored for multi-instance referring image segmentation of scene sketches. We extract language features from the expression and fuse them into a conventional instance segmentation pipeline, filtering out undesired instances in a coarse-to-fine manner while keeping the matched ones. To model the relative arrangement of the objects and the relationships among them from a global view, we propose a global reference mechanism (GRM) that assigns references to each detected candidate to identify its position. We compare with existing methods designed for multi-instance referring image segmentation of scene sketches and for the standard referring image segmentation task, and the results demonstrate the effectiveness and superiority of our approach.
  • Item
    Human Face Modeling based on Deep Learning through Line-drawing
    (The Eurographics Association, 2022) Kawanaka, Yuta; Sato, Syuhei; Sakurai, Kaisei; Gao, Shangce; Tang, Zheng; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    This paper presents a deep learning-based method for creating 3D human face models. In recent years, several sketch-based shape modeling methods have been proposed. These methods allow the user to easily model various shapes, including animals, buildings, and vehicles. However, few methods have been proposed for human face models. If we can create 3D human face models via line-drawing, models of cartoon or fantasy characters can be created easily. To achieve this, we propose a sketch-based face modeling method: when a single line-drawing image is input to our system, a corresponding 3D face model is generated. Our system is based on deep learning; many human face models and corresponding images rendered as line-drawings are prepared, and a network is trained on these datasets. For the network, we build on a previous method for reconstructing human bodies from real images and propose several extensions to enhance learning accuracy. Several examples demonstrate the usefulness of our system.
  • Item
    An Interactive Modeling System of Japanese Castles with Decorative Objects
    (The Eurographics Association, 2022) Umeyama, Shogo; Dobashi, Yoshinori; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    We present an interactive modeling system for Japanese castles. We develop a user interface that can generate the fundamental structure of the castle tower, consisting of stone walls, turrets, and roofs. When the user clicks on the screen with a mouse, the relevant parameters for the fundamental structure are automatically calculated to generate 3D models of Japanese-style castles. We use characteristic curves that often appear in ancient Japanese architecture for realistic modeling of the castles.
  • Item
    Interactive Deformable Image Registration with Dual Cursor
    (The Eurographics Association, 2022) Igarashi, Takeo; Koike, Tsukasa; Kin, Taichi; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    Deformable image registration is the process of deforming a target image to match corresponding features of a reference image. Fully automatic registration remains difficult; thus, manual registration is dominant in practice. In manual registration, an expert user specifies a set of paired landmarks on the two images; subsequently, the system deforms the target image to match each landmark with its counterpart as a batch process. However, the deformation results are difficult for the user to predict, and moving the cursor back and forth between the two images is time-consuming. To improve the efficiency of this manual process, we propose an interactive method wherein the deformation results are continuously displayed as the user clicks and drags each landmark. Additionally, the system displays two cursors, one on the target image and the other on the reference image, to reduce the amount of mouse movement required. The results of a user study reveal that the proposed interactive method achieves higher accuracy and faster task completion compared to traditional batch landmark placement.
  • Item
    Intersection Distance Field Collision for GPU
    (The Eurographics Association, 2022) Krayer, Bastian; Görge, Rebekka; Müller, Stefan; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    We present a framework for finding collision points between objects represented by signed distance fields. Particles are used to sample the region where intersections can occur, and the distance field representation is used to project the particles onto the surface of the intersection of the two objects. From there, information such as collision normals and intersection depth can be extracted. This allows various types of objects to be handled in a unified way, and the particle-based approach makes the algorithm well suited to the GPU.
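    The projection step this abstract describes (pushing particle samples onto the surface of the intersection field max(d_a, d_b) and reading off normals) can be sketched on the CPU with NumPy; the analytic sphere SDFs, the fixed iteration count, and the finite-difference gradients below are illustrative assumptions, not the paper's GPU implementation:

```python
import numpy as np

def sphere_sdf(center, radius):
    # Signed distance to a sphere: negative inside, positive outside.
    c = np.asarray(center, dtype=float)
    return lambda p: np.linalg.norm(p - c, axis=-1) - radius

def project_to_intersection(sdf_a, sdf_b, points, iters=30, eps=1e-5):
    # The intersection of the two objects is the region where
    # f(p) = max(d_a(p), d_b(p)) <= 0; repeatedly stepping each
    # particle along the (finite-difference) gradient of f by the
    # signed value f(p) drives it onto the intersection surface.
    f = lambda q: np.maximum(sdf_a(q), sdf_b(q))
    p = np.asarray(points, dtype=float).copy()
    for _ in range(iters):
        d = f(p)
        g = np.zeros_like(p)
        for axis in range(p.shape[1]):
            h = np.zeros(p.shape[1])
            h[axis] = eps
            g[:, axis] = (f(p + h) - f(p - h)) / (2 * eps)
        g /= np.linalg.norm(g, axis=-1, keepdims=True) + 1e-12
        p -= d[:, None] * g
    return p, g  # surface samples and approximate contact normals

# Two overlapping unit spheres; particles sample the overlap region.
a = sphere_sdf([0.0, 0.0, 0.0], 1.0)
b = sphere_sdf([1.2, 0.0, 0.0], 1.0)
pts = np.random.default_rng(0).uniform([0.0, -0.6, -0.6],
                                       [1.2, 0.6, 0.6], size=(128, 3))
surface, normals = project_to_intersection(a, b, pts)
residual = np.maximum(a(surface), b(surface))  # ~0 on the surface
```

    Each projected sample lies on the boundary of the intersection set, so both distance values are non-positive there and the larger of the two is numerically zero.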
  • Item
    Reconstructing Bounding Volume Hierarchies from Memory Traces of Ray Tracers
    (The Eurographics Association, 2022) Buelow, Max von; Stensbeck, Tobias; Knauthe, Volker; Guthe, Stefan; Fellner, Dieter W.; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    The ongoing race to improve computer graphics leads to more complex GPU hardware and ray tracing techniques whose internal functionality is sometimes hidden from the user. Bounding volume hierarchies and their construction are an important performance aspect of such ray tracing implementations. We propose a novel approach that utilizes binary instrumentation to collect memory traces and then extracts the bounding volume hierarchy (BVH) by analyzing access patterns. Our reconstruction allows memory traces captured independently from multiple ray tracing views to be combined, improving the reconstruction quality. It reaches accuracies of 30% to 45% when compared against the ground-truth BVH used for ray tracing a single view of a simple scene with one object. With multiple views it is even possible to reconstruct the whole BVH; we already achieve 98% with just seven views. Because our approach is largely independent of the data structures used internally, these accurate reconstructions serve as a first step toward estimating the unknown construction techniques of ray tracing implementations.
  • Item
    Improving View Independent Rendering for Multiview Effects
    (The Eurographics Association, 2022) Gavane, Ajinkya; Watson, Benjamin; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    This paper describes improvements to view independent rendering (VIR) that make it much more useful for multiview effects. Improved VIR's (iVIR's) soft shadows are nearly identical in quality to VIR's and produced with comparable speed (several times faster than multipass rendering), even when using a simpler bufferless implementation that does not risk overflow. iVIR's omnidirectional shadow results are still better, often nearly twice as fast as VIR's, even when bufferless. Most impressively, iVIR enables complex environment mapping in real time, producing high-quality reflections up to an order of magnitude faster than VIR, and 2-4 times faster than multipass rendering.
  • Item
    Adaptive and Dynamic Regularization for Rolling Guidance Image Filtering
    (The Eurographics Association, 2022) Fukatsu, Miku; Yoshizawa, Shin; Takemura, Hiroshi; Yokota, Hideo; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    Separating shapes and textures of digital images at different scales is useful in computer graphics. The Rolling Guidance (RG) filter, which removes structures smaller than a specified scale while preserving salient edges, has attracted considerable attention. Conventional RG-based filters have some drawbacks, including smoothness/sharpness quality dependence on scale and non-uniform convergence. This paper proposes a novel RG-based image filter that has more stable filtering quality at varying scales. Our filtering approach is an adaptive and dynamic regularization for a recursive regression model in the RG framework to produce more edge saliency and appropriate scale convergence. Our numerical experiments demonstrated filtering results with uniform convergence and high accuracy for varying scales.
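    For context, the underlying Rolling Guidance iteration that this paper regularizes (small structures removed by the first, effectively Gaussian, pass; salient edges recovered by re-filtering with the previous result as guidance) can be sketched in 1D. The 1D setting and the Gaussian parameters below are illustrative assumptions, and the paper's adaptive, dynamic regularization is not reproduced here:

```python
import numpy as np

def joint_bilateral_1d(image, guide, sigma_s=3.0, sigma_r=0.1, radius=9):
    # Joint bilateral filter: spatial weights from pixel distance,
    # range weights from the *guidance* signal, applied to `image`.
    n = len(image)
    out = np.empty(n)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets**2 / (2 * sigma_s**2))
    for i in range(n):
        idx = np.clip(i + offsets, 0, n - 1)
        range_w = np.exp(-(guide[idx] - guide[i])**2 / (2 * sigma_r**2))
        w = spatial * range_w
        out[i] = np.sum(w * image[idx]) / np.sum(w)
    return out

def rolling_guidance_1d(image, iters=5, **kw):
    # Iteration 1 uses a constant guide, i.e. plain Gaussian smoothing,
    # which removes small structures; later iterations re-filter the
    # original image with the previous result as guidance, recovering
    # salient edges while keeping small structures suppressed.
    guide = np.zeros_like(image)
    for _ in range(iters):
        guide = joint_bilateral_1d(image, guide, **kw)
    return guide

# A step edge plus fine texture: RG keeps the step, removes the texture.
x = np.linspace(0, 1, 200)
signal = (x > 0.5).astype(float) + 0.05 * np.sin(2 * np.pi * 40 * x)
filtered = rolling_guidance_1d(signal)
```

    On this toy signal the fine oscillation (period ~5 samples, below the spatial scale sigma_s) is removed on both sides of the step, while the step itself stays sharp because the range weights stop smoothing across it.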
  • Item
    Shadow Removal via Cascade Large Mask Inpainting
    (The Eurographics Association, 2022) Kim, Juwan; Kim, Seung-Heon; Jang, Insung; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    We present a novel shadow removal framework based on the image inpainting approach. The proposed method consists of two cascaded Large Mask inpainting (LaMa) networks, for shadow inpainting and edge inpainting. Experiments with the ISTD and adjusted ISTD datasets show that our method achieves shadow removal results competitive with state-of-the-art methods. We also show that shadows are well removed from images with complex and large shadows, such as urban aerial images.
  • Item
    Aesthetic Enhancement via Color Area and Location Awareness
    (The Eurographics Association, 2022) Yang, Bailin; Wang, Qingxu; Li, Frederick W. B.; Liang, Xiaohui; Wei, Tianxiang; Zhu, Changrui; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    Choosing a suitable color palette can typically improve image aesthetics; a naive way is to choose harmonious colors from pre-defined color combinations in color wheels. However, color palettes only specify which color types to use, not how much of each should appear in an image. It also remains challenging to automatically assign individual palette colors to suitable image regions to maximize image aesthetic quality. Motivated by this, we propose constructing a contribution-aware color palette from images with high aesthetic quality, enabling color transfer by matching the coloring and regional characteristics of an input image. We exploit public image datasets, extracting color composition and embedded color contribution features from aesthetic images to generate our proposed color palettes. We consider both image area ratio and image location as the color contribution features to extract. Quantitative experiments demonstrate that our method outperforms existing methods on SSIM (Structural SIMilarity) and PSNR (Peak Signal to Noise Ratio) for objective image quality measurement and on no-reference image assessment (NIMA) for image aesthetic scoring.
  • Item
    DARC: A Visual Analytics System for Multivariate Applicant Data Aggregation, Reasoning and Comparison
    (The Eurographics Association, 2022) Hou, Yihan; Liu, Yu; Wang, He; Zhang, Zhichao; Li, Yue; Liang, Hai-Ning; Yu, Lingyun; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    People often make decisions based on their comprehensive understanding of various materials, judgement of reasons, and comparison among choices. For instance, when hiring committees review multivariate applicant data, they need to consider and compare different aspects of the applicants' materials. However, the amount and complexity of multivariate data make it more difficult to analyze the data, extract the most salient information, and rapidly form opinions based on the extracted information. Thus, a fast and comprehensive understanding of multivariate data sets is a pressing need in many fields, such as business and education. In this work, we conducted in-depth interviews with stakeholders and characterized the user requirements involved in data-driven decision making when reviewing school applications. Based on these requirements, we propose DARC, a visual analytics system for facilitating decision making on multivariate applicant data. Through the system, users are supported in gaining insights into the multivariate data, picturing an overview of all data cases, and retrieving original data in a quick and intuitive manner. The effectiveness of DARC is validated through observational user evaluations and interviews.
  • Item
    DFGA: Digital Human Faces Generation and Animation from the RGB Video using Modern Deep Learning Technology
    (The Eurographics Association, 2022) Jiang, Diqiong; You, Lihua; Chang, Jian; Tong, Ruofeng; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    High-quality and personalized digital human faces have been widely used in media and entertainment, from film and game production to virtual reality. However, the existing technology of generating digital faces requires extremely intensive labor, which prevents the large-scale popularization of digital face technology. To tackle this problem, the proposed research will investigate deep learning-based facial modeling and animation technologies to (1) create personalized face geometry from a single image, including the recognizable neutral face shape and believable personalized blendshapes; (2) generate personalized production-level facial skin textures from a video or image sequence; and (3) automatically drive and animate a 3D target avatar by an actor's 2D facial video or audio. Our innovation is to achieve these tasks both efficiently and precisely by using an end-to-end framework with modern deep learning technology (StyleGAN, Transformer, NeRF).