I'm a first-year Computer Science Master's student at Stanford University,
working at the intersection of machine learning and robotics. My research centers on
robotic perception and manipulation. In particular, I'm exploring how robots can autonomously acquire
generalizable representations that are useful and efficient for manipulation.
This summer, I interned at Google Brain Robotics with Johnny Lee, trying to teach robots
generalizable kit assembly skills. The summer before, I interned at Nimble AI,
where I built the infrastructure for training and deploying real-time suction and
parallel-jaw grasping algorithms. Before that, I was a visiting researcher in the
Khuri-Yakub Ultrasonics Group, applying machine learning to ultrasonic transducer
applications such as a chemical nose and a touch screen.
Outside of work, I enjoy contributing to open source and blogging.
Oct. 2019 - Proud to finally share what I've been up to this summer! Read about Form2Fit on the Google AI Blog.
Apr. 2019 - I'll be joining Stanford this September for my MS degree in CS.
Form2Fit: Learning Shape Priors for Generalizable Assembly from Disassembly
Kevin Zakka, Andy Zeng, Johnny Lee, Shuran Song
A general-purpose library for robotics research.
Tune the hyperparameters of your PyTorch models with HyperBand.
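HyperBand combines random search with successive halving: each bracket samples a batch of random configurations, trains them on a small budget, and repeatedly keeps only the best 1/eta fraction with a larger budget. A minimal pure-Python sketch of that loop, not the project's actual API; `sample_config` and `evaluate` are assumed placeholders supplied by the caller:

```python
import math


def hyperband(sample_config, evaluate, max_iter=81, eta=3):
    """Minimal HyperBand sketch.

    sample_config() -> a random hyperparameter dict (assumed helper).
    evaluate(config, n_iters) -> validation loss after n_iters of training
    (lower is better).
    Returns the best (loss, config) pair seen across all brackets.
    """
    s_max = int(math.log(max_iter, eta))
    best = (float("inf"), None)
    for s in range(s_max, -1, -1):
        # Number of configs and initial per-config budget for this bracket.
        n = int(math.ceil((s_max + 1) * eta**s / (s + 1)))
        r = max_iter * eta**-s
        configs = [sample_config() for _ in range(n)]
        for i in range(s + 1):
            n_i = int(n * eta**-i)
            r_i = int(r * eta**i)
            # Evaluate every surviving config at the current budget.
            ranked = sorted(
                ((evaluate(c, r_i), c) for c in configs), key=lambda t: t[0]
            )
            if ranked and ranked[0][0] < best[0]:
                best = ranked[0]
            # Successive halving: keep the top 1/eta configurations.
            configs = [c for _, c in ranked[: max(1, n_i // eta)]]
    return best
```

In practice `evaluate` would train the PyTorch model for `n_iters` epochs from a checkpoint and return a validation metric.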
A Python API for the PhoXi 3D structured light sensors.
A processor in Verilog that computes the L2 norm of an N-dimensional
complex vector stored in a doubly-linked list. Features nifty Python
scripts to automate the reading and writing of test benches.
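As a software reference for what the hardware computes, here is the same traversal sketched in Python rather than Verilog: walk a doubly-linked list of complex components, accumulate squared magnitudes, and take the square root. The `Node` class and `build_list` helper are illustrative, not the project's code:

```python
import math


class Node:
    """One component of the complex vector, in a doubly-linked list."""

    def __init__(self, value):
        self.value = value  # a complex number
        self.prev = None
        self.next = None


def build_list(values):
    """Link the given complex values into a doubly-linked list; return the head."""
    head = prev = None
    for v in values:
        node = Node(v)
        if prev is None:
            head = node
        else:
            prev.next = node
            node.prev = prev
        prev = node
    return head


def l2_norm(head):
    """||x|| = sqrt(sum_k |x_k|^2), traversing the list front to back."""
    total = 0.0
    node = head
    while node is not None:
        total += abs(node.value) ** 2
        node = node.next
    return math.sqrt(total)


# Example: the vector (3 + 4j) has norm 5.0.
print(l2_norm(build_list([3 + 4j])))  # → 5.0
```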