Kevin Zakka, Andy Zeng, Pete Florence, Jonathan Tompson, Jeannette Bohg, Debidatta Dwibedi,
CoRL 2021, Oral Presentation
project page / arXiv / openreview / code
To leverage the vast quantity of tutorial videos on the web, we need robots that can learn from expert demonstrators whose embodiments differ drastically from their own. We tackle this cross-embodiment visual imitation problem by learning self-supervised reward functions that encode task progress and can be maximized with downstream reinforcement learning.
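As a rough illustration of the idea (not the paper's exact formulation; the encoder, the goal-frame averaging, and the distance-based reward shown here are assumptions for the sketch), one can turn a learned video embedding into a dense progress reward and hand it to any RL algorithm:

```python
import numpy as np

def goal_embedding(encoder, demo_videos):
    """Average the encoder's embedding of the final frame across demonstration videos."""
    return np.mean([encoder(video[-1]) for video in demo_videos], axis=0)

def progress_reward(encoder, frame, goal_emb):
    """Reward grows as the current observation's embedding approaches the goal embedding."""
    return -float(np.linalg.norm(encoder(frame) - goal_emb))
```

Because the reward depends only on the embedding of the current observation, it can score frames from an agent whose embodiment never appears in the demonstrations.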
Kevin Zakka, Andy Zeng, Johnny Lee, Shuran Song,
ICRA 2020, Best Paper Award in Automation Finalist
project page / blog post / arXiv / code / slides
We leverage visual geometric shape descriptors for the kit assembly task, with a nifty self-supervised data collection pipeline based on time-reversed disassembly, to build Form2Fit, a robotic system that can assemble novel objects and kits.
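The time-reversal trick can be summarized in a toy sketch (hypothetical names; the real pipeline records images and robot pick-and-place poses, not dictionaries): every logged disassembly step, read backwards, is a free assembly label.

```python
def assembly_demos_from_disassembly(disassembly_steps):
    """Reverse a recorded disassembly sequence into assembly training labels:
    a step that picked from the kit and placed on the table becomes a label
    to pick from the table and place into the kit."""
    return [
        {"pick": step["place"], "place": step["pick"]}
        for step in reversed(disassembly_steps)
    ]
```

This lets the robot generate its own supervision by repeatedly taking kits apart, without any human-annotated assembly demonstrations.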