August 08, 2024

Emre Can Açıkgöz, M.S. 2024


Current position: PhD Student at the University of Illinois Urbana-Champaign (Homepage)
MS Thesis: Grounding Language in Motor Space: Exploring Robot Action Learning and Control from Proprioception. August 2024. (PDF, Presentation)
Thesis Abstract:

Language development, particularly in its early stages, is deeply intertwined with sensory-motor experience. Babies, for instance, develop progressively through unsupervised exploration and incremental learning, such as labeling the action of "walking" only after discovering how to move their legs by trial and error. Drawing inspiration from this developmental process, our study explores robot action learning by mapping linguistic meaning onto non-linguistic experiences in autonomous agents, specifically a 7-DoF robot arm. While current grounded language learning (GLL) in robotics emphasizes visual grounding, our focus is on grounding language in the robot's internal motor space. We investigate this through two tasks, Robot Action Classification and Language-Guided Robot Control, both in a "Blind Robot" setting that relies solely on proprioceptive information, without any visual input in pixel space. In Robot Action Classification, we enable robots to understand and categorize their own actions from internal sensory data by leveraging Self-Supervised Learning (SSL): an Action Decoder is pretrained to obtain better state representations. Our SSL-based approach significantly surpasses the other baselines, particularly in scenarios with limited data. In contrast, Language-Guided Robot Control poses a greater challenge: the robot must follow natural language instructions, interpret linguistic commands, generate sequences of actions, and continuously interact with the environment. To achieve this, we pretrain another Action Decoder on sensory state data and then fine-tune it together with a Large Language Model (LLM) for stronger linguistic reasoning. This integration enables the robot arm to execute language-guided manipulation tasks in real time. We validated our approach on the popular CALVIN benchmark, where our SSL-based methodology significantly outperformed traditional architectures on action classification, particularly in low-data scenarios. Moreover, on the instruction-following tasks, our Action Decoder-based framework achieved results on par with large Vision-Language Models (VLMs) in the CALVIN table-top environment. Our results underscore the importance of robust state representations and the potential of the robot's internal motor space for learning embodied tasks.
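
To make the SSL pretraining idea concrete, here is a minimal sketch of how an Action Decoder might be pretrained on proprioceptive state sequences with a masked-reconstruction objective. The thesis abstract does not specify the exact architecture or SSL loss, so the class name, Transformer backbone, mask ratio, and all hyperparameters below are illustrative assumptions rather than the thesis implementation:

```python
# Hypothetical sketch: SSL pretraining of an "Action Decoder" on proprioceptive
# state sequences (e.g., 7-DoF joint readings) via masked reconstruction.
# The objective, backbone, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

class ActionDecoder(nn.Module):
    def __init__(self, state_dim=7, d_model=128, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.reconstruct = nn.Linear(d_model, state_dim)  # SSL reconstruction head

    def forward(self, states):                # states: (batch, time, state_dim)
        h = self.encoder(self.embed(states))  # latent state representations
        return h, self.reconstruct(h)

def ssl_pretrain_step(model, states, optimizer, mask_ratio=0.15):
    """One masked-reconstruction step: hide random timesteps and train the
    model to recover the original proprioceptive readings at those steps."""
    mask = torch.rand(states.shape[:2], device=states.device) < mask_ratio
    corrupted = states.masked_fill(mask.unsqueeze(-1), 0.0)
    _, recon = model(corrupted)
    loss = nn.functional.mse_loss(recon[mask], states[mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random stand-in data (real input would be recorded robot trajectories):
model = ActionDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
batch = torch.randn(32, 64, 7)  # 32 trajectories, 64 timesteps, 7 joint values
print(ssl_pretrain_step(model, batch, opt))
```

After pretraining, the latent representations `h` could feed a classification head for Robot Action Classification, or be fine-tuned alongside an LLM for language-guided control, as described in the abstract.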

