Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs
ICRA'20
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2020
Aiming to understand how human (false-)belief—a core socio-cognitive ability—affects human interactions with robots, this paper proposes a graphical model that unifies the representation of object states, robot knowledge, and human (false-)beliefs. Specifically, a parse graph (PG) is learned from single-view spatiotemporal parsing by aggregating object states over time; this learned representation is accumulated as the robot's knowledge. An inference algorithm is derived to fuse the individual PGs from all robots across multiple views into a joint PG, which affords more effective reasoning and inference and overcomes errors originating from any single view. In the experiments, through joint inference over PGs, the system correctly recognizes human (false-)beliefs in various settings and achieves better cross-view accuracy on a challenging small-object tracking dataset.
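As an illustrative sketch only (not the paper's actual algorithm, which performs probabilistic inference over parse graphs), multi-view fusion can be thought of as combining per-view object-state estimates so that views outvote single-view errors; a minimal majority-vote version in Python, with all names hypothetical:

```python
from collections import Counter

def fuse_views(view_states):
    """Fuse per-view object-state estimates into a joint estimate.

    view_states: list of dicts mapping object id -> observed state,
    one dict per camera view. Majority vote stands in here for the
    paper's joint parse-graph inference.
    """
    votes = {}
    for states in view_states:
        for obj, state in states.items():
            votes.setdefault(obj, Counter())[state] += 1
    # For each object, keep the state most views agree on.
    return {obj: c.most_common(1)[0][0] for obj, c in votes.items()}

# Two of three views see the cup on the table; one view is occluded.
views = [
    {"cup": "on_table", "box": "open"},
    {"cup": "on_table", "box": "open"},
    {"cup": "occluded", "box": "open"},
]
print(fuse_views(views))  # → {'cup': 'on_table', 'box': 'open'}
```

The fused estimate recovers the correct state for the cup even though one view mis-parses it, mirroring how the joint PG overcomes single-view errors.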
@inproceedings{yuan2020joint,
title={Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs},
author={Yuan, Tao and Liu, Hangxin and Fan, Lifeng and Zheng, Zilong and Gao, Tao and Zhu, Yixin and Zhu, Song-Chun},
booktitle={Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
year={2020}
}