Conference on Robot Learning (CoRL) 2024
Workshop on Lifelong Learning for Home Robots
Room: Orion (floor 2), 8:30 am - 12:30 pm | November 9 | Munich, Germany
Time | Session
---|---
8:40 | Organizers' Introductory Remarks
9:00 | Keynote 1: Alberta Longhini and Zheyu Zhuang, "Skill learning and generalisation". Abstract: We will present work on visual generalisation in robot manipulation, focusing specifically on learning from demonstrations. Because resources are limited, demonstrations often prioritise trajectory diversity over visual diversity, leaving the learned skills vulnerable to visual domain shifts such as shadows, harsh lighting, distractors, and background changes. These distractors include both self-distractors (robot-like entities) and object distractors. We introduce strategies for improving robustness against such visual domain shifts. We will also discuss the importance of enabling robots to develop skills that generalise across object variations, with examples focusing on adapting to the diverse properties of garments. Beyond generalisation, robots must also be able to handle unseen variations, learning from mistakes in the process.
9:20 | Keynote 2: Georg Martius, "Making good use of offline data -- on Causal Influence and Model-based Planning". Abstract: I will present two methodologies for leveraging offline data. In the first part, I will introduce CAIAC, a data augmentation technique designed to enhance the robustness of offline learning algorithms. By employing counterfactual reasoning, CAIAC synthesizes feasible transitions from static datasets, thereby mitigating policy dependence on spurious correlations and improving generalization beyond the training distribution. The second part of the talk focuses on the challenges of extracting goal-conditioned behaviors from unsupervised exploration data without further environmental interaction. We demonstrate that traditional goal-conditioned reinforcement learning struggles with this offline constraint due to value estimation artifacts. To address this, we propose an integrated approach combining model-based planning with graph-based value aggregation, which rectifies both local and global estimation errors and significantly boosts zero-shot goal-reaching performance.
9:40 | Keynote 3: Yonatan Bisk, "Building Memories and Understanding Trajectories". Abstract: As robots move into our lives, they have to build non-parametric memories that grow as they move around the world, while still being queryable and useful for natural language interactions. Simultaneously, our models need to understand fine-grained movement and interactions, again aligned to detailed and nuanced natural language. This talk will discuss recent work on moving our models toward better representations of both larger and more nuanced worlds.
10:00 | Spotlight Talks (IDs 1, 7, 10, 14, 17)
10:30 | Coffee Break, Socializing, Posters
11:00 | Keynote 4: Ted Xiao, "What's Missing for Robot Foundation Models?". Abstract: Intelligent robotics has seen tremendous progress in recent years. In this talk, I propose that trends in robot learning have historically followed key trends in broader foundation modeling almost exactly. After covering robotics projects that showcase the power of following such foundation modeling paradigms, I will focus on a few forward-looking research directions that may suggest a unique future for how robot learning systems may develop differently from LLMs and VLMs.
11:20 | Keynote 5: Ken Goldberg, "Is Data All You Need? Large Robot Action Models and Good Old Fashioned Engineering". Abstract: In this presentation, I share my concerns about current trends in robotics, including task definition, data collection, and experimental evaluation. I propose that to reach expected performance levels, we will need "Good Old Fashioned Engineering" (GOFE): modularity, algorithms, and metrics. I'll present MANIP2, a modular systems architecture that can integrate learning with well-established procedural algorithmic primitives such as inverse kinematics, Kalman filters, RANSAC outlier rejection, and PID modules. I'll show how we are using MANIP to improve performance on robot manipulation tasks such as grasping, cable untangling, surgical suturing, motion planning, and bagging, and propose open directions for research.
11:40 | Panel Discussion. Panelists: Ken Goldberg, Georg Martius, Yonatan Bisk, Ted Xiao, Alberta Longhini, Zheyu Zhuang
12:20 | Organizers' Closing Remarks