Multimodal Early Raw Data Fusion for Environment Sensing in Automotive Applications

Abstract
Autonomous vehicles are getting closer every day to becoming a reality in ground transportation. Computational advances have enabled powerful methods to process the large amounts of data required to drive safely on streets. Fusing the multiple sensors present in the vehicle allows building accurate world models and improving autonomous vehicles' navigation. Among current techniques, the fusion of LIDAR, RADAR, and camera data by neural networks has shown significant improvements in object detection and in the estimation of geometry and dynamic behavior. Most methods propose parallel networks to fuse the sensors' measurements, increasing the complexity and demand for computational resources. Fusing the data with a single neural network remains an open question and is the main focus of this project. The aim is to develop a single neural network architecture that fuses the three types of sensors and to evaluate and compare the resulting approach with multi-network proposals.
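To make the idea of early raw data fusion in a single network concrete, below is a minimal illustrative sketch in PyTorch. It assumes the three sensor streams have already been projected onto a common grid and simply concatenates them channel-wise before one shared backbone; the class name, tensor shapes, and layer sizes are illustrative assumptions and not the architecture presented in the poster.

import torch
import torch.nn as nn


class EarlyFusionNet(nn.Module):
    """Hypothetical single-network early fusion of camera, LIDAR, and RADAR grids."""

    def __init__(self, cam_channels=3, lidar_channels=2, radar_channels=2, num_classes=10):
        super().__init__()
        in_channels = cam_channels + lidar_channels + radar_channels
        # One shared backbone operates on the fused input tensor,
        # instead of one parallel network per sensor.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, camera, lidar_bev, radar_bev):
        # Early fusion: concatenate the per-sensor grids along the channel axis
        # so a single network sees all modalities at once.
        fused = torch.cat([camera, lidar_bev, radar_bev], dim=1)
        features = self.backbone(fused).flatten(1)
        return self.classifier(features)


if __name__ == "__main__":
    net = EarlyFusionNet()
    cam = torch.randn(1, 3, 128, 128)    # camera image projected to the grid
    lidar = torch.randn(1, 2, 128, 128)  # e.g. height and intensity maps
    radar = torch.randn(1, 2, 128, 128)  # e.g. range and velocity maps
    print(net(cam, lidar, radar).shape)  # torch.Size([1, 10])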
Description

CCS Concepts: Computing methodologies → Object identification; Object detection; Applied computing → Transportation

        
@inproceedings{10.2312:egp.20221006,
  booktitle = {Eurographics 2022 - Posters},
  editor    = {Sauvage, Basile and Hasic-Telalovic, Jasminka},
  title     = {{Multimodal Early Raw Data Fusion for Environment Sensing in Automotive Applications}},
  author    = {Pederiva, Marcelo Eduardo and Martino, José Mario De and Zimmer, Alessandro},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISSN      = {1017-4656},
  ISBN      = {978-3-03868-171-7},
  DOI       = {10.2312/egp.20221006}
}