Conference on Robot Learning (CoRL) 2024

Workshop on Lifelong Learning for Home Robots

Room: Orion (floor 2), 8:30 am - 12:30 pm | November 9 | Munich, Germany


Abstract
Home robots should adapt swiftly by learning user preferences, understanding environmental constraints, and mastering new tasks. These lifelong learning processes should involve user demonstrations or dialogue, allowing the robot to receive guidance and resolve ambiguities through interaction with the user. Despite significant advances in both high-level and low-level behavioral systems for robotics, several challenges remain for their deployment in home environments. Existing systems are typically designed for specific tasks and often struggle to adjust to changes or learn from ongoing experience, rendering them less effective in dynamic, real-world settings such as homes. Addressing these challenges requires robots capable of lifelong learning. Lifelong learning enables robots to build rich, nuanced representations of their surroundings, to adapt continuously, and to make informed decisions based on their current environment, allowing them to transcend the limitations of their initial training and evolve through interaction and experience.
Event Information
This is a primarily in-person workshop, held at the 2024 Conference on Robot Learning (CoRL) in Munich, Germany, on 9 November 2024, starting at 08:30 CET.

The workshop takes place in the Orion room of the Science Congress Center, on the Technical University of Munich (Garching) campus. You may find the venue information on the CoRL 2024 website helpful.


Schedule
  • 8:40   Introductory Remarks (Organizers)
  • 9:00   Keynote 1: Alberta Longhini and Zheyu Zhuang, "Skill learning and generalisation"
  • 9:20   Keynote 2: Georg Martius, "Making good use of offline data -- on Causal Influence and Model-based Planning"
  • 9:40   Keynote 3: Yonatan Bisk, "Building Memories and Understanding Trajectories"
  • 10:00  Spotlight Talks (Paper IDs 1, 7, 10, 14, 17)
  • 10:30  Coffee Break, Socializing, Posters
  • 11:00  Keynote 4: Ted Xiao, "What's Missing for Robot Foundation Models?"
  • 11:20  Keynote 5: Ken Goldberg, "Is Data All You Need?: Large Robot Action Models and Good Old Fashioned Engineering"
  • 11:40  Panel Discussion (Panelists: Ken Goldberg, Georg Martius, Yonatan Bisk, Ted Xiao, Alberta Longhini, Zheyu Zhuang)
  • 12:20  Closing Remarks (Organizers)


Call for Papers
Submission Guidelines
CoRL LLHomeRobots 2024 accepts 4+N (short) or 8+N (full) paper formats: 4 or 8 pages of main content, plus unlimited additional pages for references, appendices, etc.

Submissions will be handled through CMT: https://cmt3.research.microsoft.com/LLHOMEROBOTS2024

Submissions should use the official LaTeX paper template provided by CoRL 2024.

Our review process will be double-blind, following the CoRL 2024 paper submission policy.

All accepted papers will be invited for poster presentation; the highest-rated papers, as determined by the Technical Program Committee, will also be given spotlight presentations. Accepted papers will be made available on this workshop website as non-archival reports, so authors may also submit their work to future conferences or journals. We will announce the Best Reviewer and the Best Paper Award during the closing remarks at the workshop.

Key areas we are targeting in this workshop include:

  • How can robots effectively learn from human demonstrations and feedback in a natural and intuitive manner?
  • How can robots learn and adapt to the individual preferences and routines of different household members?
  • What mechanisms are needed to ensure personalized and user-specific interactions and task performances?
  • What representations can serve as robust and flexible memory systems that allow household robots to recall past experiences and apply this knowledge to new situations?
  • What are the optimal strategies for balancing short-term (e.g., ‘working memory’) and long-term memory in robotic systems?
  • How can robots autonomously detect and correct errors in task execution, with or without human intervention?
  • What frameworks can enable robots to learn from their mistakes and improve over time?
  • Few-shot learning or prompting for learning new behaviours
  • Neural architectures of large models to support lifelong learning
  • What evaluation metrics should we focus on for lifelong learning?
  • The role of reinforcement learning and LLMs in lifelong learning
Important Dates
  • Submission deadline: 15 October 2024, 23:59 AOE.
  • Author Notifications: 25 October 2024, 23:59 AOE.
  • Camera Ready: 3 November 2024, 23:59 AOE.
  • Workshop: 9 November 2024, 08:30-12:30 CET

Accepted Papers
  • (Paper ID #1) FLaRe: Achieving Masterful and Adaptive Robot Policies with Large-Scale Reinforcement Learning Fine-Tuning [paper] (spotlight)
    Jiaheng Hu, Rose Hendrix, Ali Farhadi, Aniruddha Kembhavi, Roberto Martin-Martin, Peter Stone, Kuo-Hao Zeng, Kiana Ehsani
  • (Paper ID #2) Robot Utility Models: General Policies for Zero-Shot Deployment in New Environments [paper]
    Haritheja Etukuru, Norihito Naka, Zijin Hu, Seungjae Lee, Julian Mehu, Aaron Edsinger, Christopher Paxton, Soumith Chintala, Lerrel Pinto, Nur Muhammad (Mahi) Shafiullah
  • (Paper ID #3) Learning Precise, Contact-Rich Manipulation through Uncalibrated Tactile Skins [paper]
    Venkatesh Pattabiraman, Yifeng Cao, Siddhant Haldar, Lerrel Pinto, Raunaq M Bhirangi
  • (Paper ID #4) AnySkin: Plug-and-play Skin Sensing for Robotic Touch [paper]
    Raunaq M Bhirangi, Venkatesh Pattabiraman, Enes Erciyes, Yifeng Cao, Tess Hellebrekers, Lerrel Pinto
  • (Paper ID #5) Local Policies Enable Zero-shot Long-horizon Manipulation [paper]
    Murtaza Dalal, Min Liu, Walter Talbott, Chen Chen, Deepak Pathak, Jian Zhang, Ruslan Salakhutdinov
  • (Paper ID #6) Embodied-RAG: General Non-parametric Embodied Memory for Retrieval and Generation [paper]
    Quanting Xie, So Yeon Min, Tianyi Zhang, Kedi Xu, Aarav Bajaj, Ruslan Salakhutdinov, Matthew Johnson-Roberson, Yonatan Bisk
  • (Paper ID #7) DynaMem: Online Dynamic Spatio-Semantic Memory for Open World Mobile Manipulation [paper] (spotlight) (Best Paper Award)
    Peiqi Liu, Zhanqiu Guo, Mohit Warke, Soumith Chintala, Christopher Paxton, Nur Muhammad (Mahi) Shafiullah, Lerrel Pinto
  • (Paper ID #8) Multi-Modal 3D Scene Graph Updater for Shared and Dynamic Environments [paper]
    Emilio Olivastri, Jonathan Francis, Alberto Pretto, Niko Sünderhauf, Krishan Rana
  • (Paper ID #9) Online Continual Learning for Interactive Instruction Following Agents [paper]
    Byeonghwi Kim, MinHyuk Seo, Jonghyun Choi
  • (Paper ID #10) Continuously Improving Mobile Manipulation with Autonomous Real-World RL [paper] (spotlight)
    Russell Mendonca, Emmanuel Panov, Bernadette Bucher, Jiuguang Wang, Deepak Pathak
  • (Paper ID #11) STEVE-Audio: Expanding the Goal Conditioning Modalities of Embodied Agents in Minecraft [paper]
    Nicholas Lenzen, Amogh Prashant Raut, Andrew Melnik
  • (Paper ID #12) Learning from Demonstrations with 3D Gaussian Splatting [paper]
    Michael Büttner, Jonathan Francis, Helge Rhodin, Andrew Melnik
  • (Paper ID #13) Parental Guidance: Efficient Lifelong Learning through Evolutionary Distillation [paper]
    Zhengyu Zhang, Quanquan Peng, Rosario Scalise, Byron Boots
  • (Paper ID #14) OLiVia-Nav: An Online Lifelong Vision Language Approach for Mobile Robot Social Navigation [paper] (spotlight)
    Siddarth Narasimhan, Aaron H Tan, Daniel Choi, Goldie Nejat
  • (Paper ID #15) BYE: Build Your Encoder with One Sequence of Exploration Data for Long-Term Dynamic Scene Understanding and Navigation [paper]
    Chenguang Huang, Wolfram Burgard
  • (Paper ID #16) LaNMP: A Language-Conditioned Mobile Manipulation Benchmark for Autonomous Robots [paper]
    Ahmed Jaafar, Shreyas Sundara Raman, Yichen Wei, Sofia Juliani, Anneke Wernerfelt, Benedict Quartey, Ifrah Idrees, Jason Liu, Stefanie Tellex
  • (Paper ID #17) Out-of-Distribution Recovery with Object-Centric Keypoint Inverse Policy For Visuomotor Imitation Learning [paper] (spotlight)
    George Gao, Tianyu Li, Nadia Figueroa
  • (Paper ID #19) Cognitive Planning for Object Goal Navigation using Generative AI Models [paper]
    Arjun P S, Andrew Melnik, G C Nandi
  • (Paper ID #20) Visual Rearrangement in Embodied AI with 3D Gaussian Splatting and Dense Feature Matching [paper]
    Arjun P S, Andrew Melnik, G C Nandi
  • (Paper ID #21) 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos [paper]
    Marvin Heidinger, Snehal Jauhri, Vignesh Prasad, Georgia Chalvatzaki

Organizers

  • Andrew Melnik, University of Bremen
  • Jonathan Francis, Bosch Center for Artificial Intelligence
  • Krishan Rana, QUT Centre for Robotics
  • Uksang Yoo, Carnegie Mellon University
  • Arthur Bucker, Carnegie Mellon University
  • Dimity Miller, QUT Centre for Robotics
  • Chris Paxton, Hello Robot Inc.



Contact and Information
Direct questions to llhomerobots@gmail.com.

Subscribe to our mailing list to stay updated on news from our workshop series.