Attention And Positional Encoding Are (Almost) All You Need For Shape Matching

Date
2023
Journal Title
Computer Graphics Forum
Journal ISSN
1467-8659
Volume Title
Publisher
The Eurographics Association and John Wiley & Sons Ltd.
Abstract
The rapid development of novel approaches derived from the Transformer architecture has led to outstanding performance in a wide range of scenarios, from Natural Language Processing to Computer Vision. Recently, these approaches have achieved impressive results even in the challenging task of non-rigid shape matching. However, little is known about the capability of the Transformer-encoder architecture for the shape matching task, and its performance remains largely unexplored. In this paper, we step back and investigate the contribution of the Transformer-encoder architecture compared to its more recent alternatives, focusing on why and how it works on this specific task. Thanks to the versatility of our implementation, we can harness the bi-directional structure of the correspondence problem, making it more interpretable. Furthermore, we prove that positional encodings are essential for processing unordered point clouds. Through a comprehensive set of experiments, we find that attention and positional encoding are (almost) all you need for shape matching: the simple Transformer-encoder architecture, coupled with relative position encoding in the attention mechanism, obtains strong improvements and reaches the current state of the art.
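
To make the abstract's central ingredient concrete, below is a minimal NumPy sketch (not the authors' implementation) of a single attention head whose scores are biased by a relative positional encoding computed from pairwise 3D point offsets. All names, weight shapes, and the offset-to-bias projection Wp are illustrative assumptions, not details taken from the paper.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def relative_position_attention(feats, coords, Wq, Wk, Wv, Wp):
    """One attention head with a relative positional bias (illustrative).

    feats : (n, d) per-point features
    coords: (n, 3) 3D point positions (unordered; no canonical indexing)
    Wq, Wk, Wv: (d, d) learned projections (hypothetical parameters)
    Wp: (3, 1) learned map from a relative offset to a scalar score bias
    """
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    d = feats.shape[1]
    # Content term: standard scaled dot-product attention scores.
    scores = (q @ k.T) / np.sqrt(d)                   # (n, n)
    # Relative term: encode coords[j] - coords[i] and add it as a bias,
    # so the layer sees geometry even though the points carry no order.
    rel = coords[None, :, :] - coords[:, None, :]     # (n, n, 3)
    scores = scores + (rel @ Wp).squeeze(-1)          # (n, n)
    return softmax(scores, axis=-1) @ v               # (n, d)

rng = np.random.default_rng(0)
n, d = 64, 32
feats, coords = rng.standard_normal((n, d)), rng.standard_normal((n, 3))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
Wp = rng.standard_normal((3, 1)) * 0.1
out = relative_position_attention(feats, coords, Wq, Wk, Wv, Wp)
print(out.shape)  # (64, 32)

Because the bias depends only on coordinate differences, permuting the input points merely permutes the output rows, which is the equivariance an unordered point cloud requires.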
Description

CCS Concepts: Computing methodologies -> Shape analysis; Theory of computation -> Computational geometry

        
Citation
@article{10.1111:cgf.14912,
  journal   = {Computer Graphics Forum},
  title     = {{Attention And Positional Encoding Are (Almost) All You Need For Shape Matching}},
  author    = {Raganato, Alessandro and Pasi, Gabriella and Melzi, Simone},
  year      = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14912}
}