In-situ Value-aligned Human-Robot Interactions with Physical Constraints
IROS · 2025
Abstract
Equipped with large language models (LLMs), human-centered robots can now perform a wide range of tasks previously deemed challenging or unattainable. However, merely completing tasks is insufficient for cognitive robots, which should also learn human preferences and apply them to future scenarios. In this work, we propose a framework that combines human preferences with physical constraints, requiring robots to complete tasks while considering both. First, we develop a benchmark of everyday household activities that are often evaluated against specific preferences. We then introduce in-context learning from human feedback (ICLHF), where human feedback comes from direct instructions and from adjustments made intentionally or unintentionally in daily life. Extensive experiments, testing ICLHF's ability to generate task plans and to balance physical constraints against preferences, demonstrate the efficiency of our approach.
Citation
@inproceedings{li2025iclhf,
  title={In-situ Value-aligned Human-Robot Interactions with Physical Constraints},
  author={Hongtao Li and Ziyuan Jiao and Xiaofeng Liu and Hangxin Liu and Zilong Zheng},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2025}
}