Authors: Parker Ewen, Hao Chen, Yuzhen Chen, Anran Li, Anup Bagali, Gitesh Gunjal, Ram Vasudevan
Published on: February 08, 2024
Impact Score: 8.15
arXiv code: arXiv:2402.05872
Summary
- What is new: Introduction of a novel, multi-modal approach that jointly represents semantic predictions and physical property estimates probabilistically, using conjugate pairs for closed-form Bayesian updates (see the sketch after this list).
- Why this is important: Estimating physical properties (such as friction or weight) is challenging for robots because existing approaches require large amounts of labelled data and are difficult to update in real time.
- What the research proposes: A multi-modal approach that combines visual and tactile measurements to update models probabilistically without the need for extra training data.
- Results: Outperformed state-of-the-art semantic classification methods in hardware experiments and was demonstrated in applications such as affordance-based property representation and terrain traversal, with a legged robot adjusting its gait based on probabilistic friction estimates.
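As a minimal sketch of what a conjugate-pair, closed-form Bayesian update for the semantic component could look like (the Dirichlet-Categorical model, class labels, and numbers here are illustrative assumptions, not the paper's implementation):

```cpp
// Minimal sketch: Dirichlet-Categorical conjugate update for per-cell
// semantic class probabilities. Assumed model, not the authors' code.
#include <array>
#include <iostream>
#include <numeric>

constexpr int kNumClasses = 4;  // e.g. grass, gravel, asphalt, mud (assumed labels)

struct SemanticBelief {
  // Dirichlet concentration parameters; a uniform prior.
  std::array<double, kNumClasses> alpha{1.0, 1.0, 1.0, 1.0};

  // Closed-form Bayesian update: fold a soft semantic measurement
  // (e.g. a classifier's class probabilities) into the concentrations.
  void update(const std::array<double, kNumClasses>& class_probs) {
    for (int c = 0; c < kNumClasses; ++c) alpha[c] += class_probs[c];
  }

  // Posterior mean of the class distribution.
  std::array<double, kNumClasses> mean() const {
    double total = std::accumulate(alpha.begin(), alpha.end(), 0.0);
    std::array<double, kNumClasses> m{};
    for (int c = 0; c < kNumClasses; ++c) m[c] = alpha[c] / total;
    return m;
  }
};

int main() {
  SemanticBelief belief;
  belief.update({0.7, 0.2, 0.05, 0.05});  // one visual measurement
  belief.update({0.6, 0.3, 0.05, 0.05});  // another frame
  for (double p : belief.mean()) std::cout << p << ' ';
  std::cout << '\n';
}
```

Because the Dirichlet is conjugate to the categorical likelihood, each update is a simple parameter increment, which is what makes real-time operation without retraining plausible.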
Technical Details
Technological frameworks used: Open-source C++ implementation with a ROS interface.
Models used: Bayesian models for closed-form updates based on conjugate pairs.
Data used: Visual and tactile measurements (see the fusion sketch below).
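A similarly hedged sketch of folding a tactile measurement into a physical-property belief with a Normal-Normal conjugate update (the friction prior and the measurement-noise variance are assumptions for illustration, not values from the paper):

```cpp
// Minimal sketch: Normal-Normal conjugate update for a scalar friction
// coefficient estimated from tactile probing. Assumed values throughout.
#include <iostream>

struct FrictionBelief {
  double mean = 0.5;   // prior mean friction coefficient (assumed)
  double var = 0.04;   // prior variance (assumed)

  // Fuse a tactile friction measurement with known noise variance.
  void update(double measurement, double meas_var) {
    double k = var / (var + meas_var);      // Kalman-style gain
    mean = mean + k * (measurement - mean); // posterior mean
    var = (1.0 - k) * var;                  // posterior variance shrinks
  }
};

int main() {
  FrictionBelief mu;
  mu.update(0.62, 0.01);  // one tactile probe reading
  std::cout << "posterior mean: " << mu.mean << ", var: " << mu.var << '\n';
}
```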
Potential Impact
Robotics and automation companies, enterprises in manufacturing and logistics, and developers of robotic operating systems could benefit from or need to adapt to this approach.
Want to implement this idea in a business?
We have generated a startup concept here: TactiSense.