TransforMerger: Transformer-based Voice-Gesture Fusion for Robust Human-Robot Communication

Video materials (coming soon) | Paper on arXiv | PDF | Source code

Petr Vanc (1) and Karla Stepanova (1)

(1) Czech Technical University in Prague, Czech Institute of Informatics, Robotics, and Cybernetics
petr.vanc@cvut.cz, karla.stepanova@cvut.cz

Abstract

As human-robot collaboration advances, natural and flexible communication methods are essential for effective robot control. Traditional methods that rely on a single modality or rigid rules struggle with noisy or misaligned data, as well as with object descriptions that do not perfectly match predefined object names (e.g., "Pick that red object"). We introduce TransforMerger, a transformer-based reasoning model that infers a structured action command for robotic manipulation from fused voice and gesture inputs. Our approach merges multimodal data into a single unified sentence, which is then processed by a language model. We employ probabilistic embeddings to handle uncertainty and integrate contextual scene understanding to resolve ambiguous references (e.g., gestures pointing to multiple objects or vague verbal cues like "this"). We evaluate TransforMerger in simulated and real-world experiments, demonstrating its robustness to noise, misalignment, and missing information. Our results show that TransforMerger outperforms deterministic baselines, especially in scenarios requiring contextual knowledge, enabling more robust and flexible human-robot communication.
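To make the fusion step concrete, the sketch below illustrates one way voice and gesture streams could be merged into a single unified sentence for a language model, as the abstract describes. This is not the authors' implementation: the class names, the pointing-probability representation, and the sentence format are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of fusing a voice
# transcript with gesture pointing probabilities into one unified sentence.

from dataclasses import dataclass


@dataclass
class GestureObservation:
    """Hypothetical gesture reading: pointing probabilities over scene objects."""
    timestamp: float
    object_probs: dict[str, float]  # e.g. {"red cube": 0.58, "red ball": 0.31}


def fuse_to_unified_sentence(transcript: str,
                             gesture: GestureObservation,
                             top_k: int = 2) -> str:
    """Merge the transcript with the top-k gesture targets into a single
    sentence that a language model can reason over (assumed format)."""
    ranked = sorted(gesture.object_probs.items(), key=lambda kv: -kv[1])[:top_k]
    gesture_clause = ", ".join(f"{name} (p={p:.2f})" for name, p in ranked)
    return f'User said: "{transcript}". User pointed at: {gesture_clause}.'


if __name__ == "__main__":
    g = GestureObservation(
        timestamp=3.2,
        object_probs={"red cube": 0.58, "red ball": 0.31, "blue cup": 0.11},
    )
    print(fuse_to_unified_sentence("Pick that red object", g))
    # User said: "Pick that red object". User pointed at: red cube (p=0.58), red ball (p=0.31).
```

Keeping the gesture targets as ranked probabilities in the sentence (rather than committing to a single object) mirrors the paper's use of probabilistic inputs: the downstream language model can weigh "red cube" against "red ball" using the verbal cue "red object" and the scene context.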

Video

A video demonstrating the proposed system in use.