MANIP

Integrating Interactive Perception into Long-Horizon Robot Manipulation Systems

1 The AUTOLab at UC Berkeley

2 The Toyota Research Institute

*Denotes Equal Contribution

Accepted to IROS 2024




[Figure: MANIP system diagram.]

Overview

Interactive perception, in which a robot acts to change the environment in order to facilitate perception, can be useful for robot manipulation, especially for long-horizon tasks such as folding clothes, routing cables, untangling knots, and surgical suturing. However, it is often unclear how to integrate interactive perception primitives into manipulation policies. Long-horizon robot manipulation tasks pose significant challenges due to partial observability, compounding errors, and the growth of the state-action space with task length. We propose MANIP, a modular architecture that helps integrate combinations of interactive perception and task-completion policies and primitives. MANIP facilitates switching between minimizing uncertainty with interactive perception and maximizing reward with task policies. We show that MANIP can be mapped to, and can potentially improve, 3 prior robot manipulation systems. We then apply MANIP to extend our previously published cable tracing system. The resulting system improves cable tracing by up to 88% across 4 tiers of difficulty in 120 physical robot trials in adversarial settings with cables.
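The switching behavior described above can be illustrated with a minimal sketch: when the robot's belief about the scene is too uncertain, it invokes an interactive perception primitive; otherwise it executes a task primitive. All names here (the belief dictionary, the threshold value, and the primitive functions) are illustrative assumptions for exposition, not the actual MANIP interface:

```python
# Hypothetical sketch of MANIP-style switching between interactive
# perception (uncertainty reduction) and task primitives (reward
# maximization). The belief representation, threshold, and primitive
# effects below are assumptions, not the paper's implementation.
import random

random.seed(0)

UNCERTAINTY_THRESHOLD = 0.3  # switch point between modes (assumed)

def estimate_uncertainty(belief):
    # Placeholder for, e.g., entropy of a belief over cable state.
    return belief["uncertainty"]

def interactive_perception_primitive(belief):
    # Acts on the environment to improve perception, e.g., sliding a
    # gripper along a cable to expose its path. Here: halves uncertainty.
    belief["uncertainty"] *= 0.5
    return belief

def task_primitive(belief):
    # Makes task progress (e.g., a tracing move) but may re-introduce
    # perceptual uncertainty as the scene changes.
    belief["progress"] += 1
    belief["uncertainty"] = min(1.0, belief["uncertainty"] + random.uniform(0.0, 0.4))
    return belief

def rollout(steps=10):
    belief = {"uncertainty": 1.0, "progress": 0}
    for _ in range(steps):
        if estimate_uncertainty(belief) > UNCERTAINTY_THRESHOLD:
            belief = interactive_perception_primitive(belief)
        else:
            belief = task_primitive(belief)
    return belief

if __name__ == "__main__":
    print(rollout())
```

The point of the modular structure is that either branch can be swapped out, so different combinations of perception primitives and task policies can be composed without changing the outer loop.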

Example Rollout