Authors: Martín Bayón-Gutiérrez, María Teresa García-Ordás, Héctor Alaiz Moretón, Jose Aveleira-Mata, Sergio Rubio Martín, José Alberto Benítez-Andrades
Published on: May 14, 2024
Impact Score: 7.2
arXiv code: arXiv:2405.08429
Summary
- What is new: The introduction of a Twin Encoder-Decoder Neural Network (TEDNet) model for simultaneous feature extraction from camera and LiDAR data.
- Why this is important: Accurate road-surface estimation is essential for the safe navigation of autonomous vehicles.
- What the research proposes: Developing a TEDNet model that integrates Bird’s Eye View projections for real-time semantic segmentation of road surfaces using both camera and LiDAR data.
- Results: The model performs comparably to state-of-the-art methods and processes data at the same frame rate as the LiDAR and camera sensors, making it suitable for real-time applications.
Technical Details
Technological frameworks used: Twin Encoder-Decoder Neural Network (TEDNet)
Models used: Semantic segmentation models
Data used: KITTI Road dataset
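As a rough illustration of the Bird's Eye View input the model consumes, the sketch below rasterizes a LiDAR point cloud into a top-down height grid. The function name, grid ranges, and cell size are assumptions for illustration only; the paper's actual BEV preprocessing (and its camera-projection counterpart) may differ.

```python
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.2):
    """Project LiDAR points (N, 3) into a Bird's Eye View height map.

    Each cell stores the maximum point height falling into it. The grid is
    zero-initialized, so in this simplified sketch empty cells read 0 and
    negative heights are clipped. Rows follow x (forward), columns follow y.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((nx, ny), dtype=np.float32)

    # Keep only points inside the grid bounds.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]

    # Map metric coordinates to integer cell indices.
    ix = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / cell).astype(int)

    # Max-height aggregation per cell (handles repeated indices correctly).
    np.maximum.at(bev, (ix, iy), pts[:, 2])
    return bev
```

A grid like this (optionally stacked with density or intensity channels) can be fed to one encoder branch, while the camera image, projected into the same BEV frame, feeds the twin branch.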
Potential Impact
Autonomous vehicle manufacturers, automotive safety technology companies, and providers of navigation systems could benefit from the insights of this paper.
Want to implement this idea in a business?
We have generated a startup concept here: RoadSenseAI.