Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks

International Conference on Robotics and Automation, 2021

Abstract. Rearranging and manipulating deformable objects such as cables, fabrics, and bags is a long-standing challenge in robotic manipulation. The complex dynamics and high-dimensional configuration spaces of deformables, compared to rigid objects, make manipulation difficult not only for multi-step planning, but even for goal specification. Goals cannot be as easily specified as rigid object poses, and may involve complex relative spatial relations such as "place the item inside the bag". In this work, we develop a suite of simulated benchmarks with 1D, 2D, and 3D deformable structures, including tasks that involve image-based goal-conditioning and multi-step deformable manipulation. We propose embedding goal-conditioning into Transporter Networks, a recently proposed model architecture for robotic manipulation that uses learned template matching to infer displacements that can represent pick and place actions. We demonstrate that goal-conditioned Transporter Networks enable agents to manipulate deformable structures into flexibly specified configurations without test-time visual anchors for target locations. We also significantly extend prior results using Transporter Networks for manipulating deformable objects by testing on tasks with 2D and 3D deformables.
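To give a rough sense of the template-matching idea behind Transporter Networks, the sketch below cross-correlates a feature crop around the pick point with the full scene to score candidate place locations. This is a hypothetical, minimal NumPy illustration (the function name, feature maps, and loop-based correlation are assumptions for clarity, not the paper's implementation, which uses learned deep features and fully convolutional networks):

```python
import numpy as np

def place_scores(scene_feat, pick_feat):
    """Score every candidate place location by cross-correlating a
    feature crop around the pick point (pick_feat) against the full
    scene feature map (scene_feat). Higher score = better match.
    Hypothetical sketch, not the paper's implementation."""
    H, W = scene_feat.shape
    h, w = pick_feat.shape
    scores = np.empty((H - h + 1, W - w + 1))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            # Dot product of the crop with the scene patch at (i, j).
            scores[i, j] = np.sum(scene_feat[i:i + h, j:j + w] * pick_feat)
    return scores
```

In the goal-conditioned variants studied in the paper, features computed from the goal image are additionally fused into this matching step, so the best-scoring placement depends on the desired final configuration rather than on visual targets in the current scene.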


Latest version (March 26, 2021): (arXiv link here). The arXiv version is the most up-to-date and includes the supplementary material.


3-Minute Summary Video (with Captions)

Videos -- Scripted Demonstrations

These are screen recordings of the scripted demonstrator policy for tasks in DeformableRavens. Recordings of the bag tasks are slightly sped up, and some GIFs are compressed to reduce file sizes.


Videos -- Learned Policies

These are screen recordings of learned policies deployed on test-time starting configurations. To speed up some GIFs and reduce file sizes, we removed frames corresponding to pauses between pick and place actions.

Bag-Items-1. (Zoomed-in) Transporter trained on 100 demos, and deployed at test-time. It successfully opens the bag, inserts the cube, and brings the bag to the target.

Bag-Items-2. Transporter trained on 1000 demos, and deployed at test-time. It successfully opens the bag, inserts both blocks, and brings the bag to the target.

Bag-Color-Goal. Transporter-Goal-Split trained on 10 demos, and deployed at test-time. The goal image (not shown above) shows the item in the red bag, in which the policy correctly inserts the block.

Videos -- Limitations and Failure Cases

These are screen recordings showing some informative failure cases, which may happen with scripted demonstrators or with learned policies. These motivate some interesting future work directions, such as learning policies that can explicitly recover from failures.

Bag-Items-1. Possibly the most common failure case. Above is the scripted policy, but this also occurs with learned Transporters. The cube is placed at a reasonable spot, but falls out of the bag when the robot attempts to lift it.

Bag-Items-1. Failure with a learned Transporter policy (trained with 10 demos). The policy repeatedly attempts to insert the cube in the bag but fails, and (erroneously) brings an empty bag to the target.


Here is the GitHub link: https://github.com/DanielTakeshi/deformable-ravens. If you have questions, please use the public issue tracker. I will try to actively monitor the issue reports.

Demonstration Data

These are zipped files containing 1000 episodes of demonstration data per task. These are used to train policies.

  1. Cable-Ring --- (LINK (4.0G))

  2. Cable-Ring-Notarget --- (LINK (3.9G))

  3. Cable-Shape --- (LINK (4.1G))

  4. Cable-Shape-Notarget --- (LINK (4.1G))

  5. Cable-Line-Notarget --- (LINK (3.3G))

  6. Fabric-Cover --- (LINK (1.6G))

  7. Fabric-Flat --- (LINK (2.2G))

  8. Fabric-Flat-Notarget --- (LINK (2.2G))

  9. Bag-Alone-Open --- (LINK (2.5G))

  10. Bag-Items-1 --- (LINK (2.2G))

  11. Bag-Items-2 --- (LINK (2.8G))

  12. Bag-Color-Goal --- (LINK (2.2G))

  13. Block-Notarget --- (LINK (1.0G))

These are zipped files that contain demonstration data for 20 goals. These are only used for the goal-conditioned cases to ensure evaluation is done in a reasonably consistent manner.

  1. Cable-Shape-Notarget --- (LINK)

  2. Cable-Line-Notarget --- (LINK)

  3. Fabric-Flat-Notarget --- (LINK)

  4. Bag-Color-Goal --- (LINK)

  5. Block-Notarget --- (LINK)

To extract, run tar -zxvf [filename].tar.gz. Some of the data files will extract to different directory names, since we changed some task names for the paper (while keeping the code and data under the original task names). Specifically, (1) the three "fabric" tasks are referred to as "cloth", (2) "bag-items-1" and "bag-items-2" are referred to as "bag-items-easy" and "bag-items-hard", and (3) "block-notarget" is referred to as "insertion-goal".

The demonstration data should be extracted to data/ and the goals data should be extracted to goals/.
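A possible workflow for the extraction step is sketched below (the loop and directory layout are assumptions; goal archives would be extracted into goals/ in the same way):

```shell
# Create the expected target directories.
mkdir -p data goals
# Extract every downloaded demonstration archive into data/.
for f in *.tar.gz; do
  [ -e "$f" ] || continue   # skip if no archives have been downloaded yet
  tar -zxvf "$f" -C data/
done
```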


@inproceedings{seita_bags_2021,
    author    = {Daniel Seita and Pete Florence and Jonathan Tompson and Erwin Coumans and Vikas Sindhwani and Ken Goldberg and Andy Zeng},
    title     = {{Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks}},
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    year      = {2021}
}


Daniel Seita is supported by the Graduate Fellowships for STEM Diversity (website). We thank Xuchen Han for assistance with deformables in PyBullet, and Julian Ibarz for helpful feedback on writing.