I'm a second-year Ph.D. student in the Computer Science Department at Brown University, where I'm fortunate to be advised by Daniel Ritchie.

Broadly, I'm interested in using machine learning and artificial intelligence techniques to better understand and represent 3D scenes and objects. Recently, I've been exploring how best to combine explicit symbolic representations with neural methods for generating 3D shape structures.

Learning to Infer Shape Programs Using Latent Execution Self Training
We developed a method for shape program inference based on a modified version of self-training that uses a black-box program executor to filter out incorrect pseudo-labels. We demonstrate that it converges faster and achieves better reconstruction quality than policy-gradient reinforcement learning.
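The core idea can be sketched as a simple loop: infer a program, execute it, and keep the (shape, program) pair as a pseudo-label only if the execution actually reconstructs the target. The toy below is a minimal illustration under stated assumptions, not the paper's actual method: `ToyModel`, `execute`, and `iou` are illustrative stand-ins, shapes are sets of filled grid cells, and a "program" is just a list of cells for the executor to fill.

```python
def execute(program):
    """Stand-in black-box executor: run a program, return the shape it makes."""
    return set(program)

def iou(a, b):
    """Intersection-over-union between two cell sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

class ToyModel:
    """Stand-in for a neural inference model: memorizes accepted pairs."""
    def __init__(self):
        self.memory = {}

    def infer(self, shape):
        # Return a memorized program if available, else an imperfect guess
        # (here: the shape with one cell dropped, to simulate model error).
        return self.memory.get(frozenset(shape), sorted(shape)[:-1])

    def fit(self, dataset):
        for shape, program in dataset:
            self.memory[frozenset(shape)] = program

def self_train(model, shapes, threshold=0.9, rounds=3):
    """Self-training with an execution check: only pseudo-labels whose
    executed output closely matches the target shape are kept."""
    for _ in range(rounds):
        dataset = []
        for shape in shapes:
            program = model.infer(shape)
            if iou(execute(program), shape) >= threshold:
                dataset.append((shape, program))  # accepted pseudo-label
        model.fit(dataset)
    return model
```

The executor acts purely as a filter here, which is what lets it remain a black box: no gradients flow through it.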
ShapeAssembly: Learning to Generate Programs for 3D Shape Structure Synthesis
SIGGRAPH Asia 2020
Paper | Project Page | Code | Video | Supplemental
We present a deep generative model that takes a hybrid neural-procedural approach, learning to synthesize 3D shapes by writing programs in ShapeAssembly, a domain-specific 'assembly language' for 3D shape structures.
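To give a feel for the program-as-shape idea, here is a toy interpreter for an imperative cuboid-and-attach language, written in the spirit of an 'assembly language' for shape structure. This is an illustrative sketch only: the command names, semantics, and the example "table" program are invented for this snippet and are not the actual ShapeAssembly grammar.

```python
class Cuboid:
    """A named axis-aligned box part with dimensions and a position."""
    def __init__(self, name, l, w, h):
        self.name, self.dims = name, (l, w, h)
        self.position = (0.0, 0.0, 0.0)

def attach(part, base, offset):
    """Toy semantics: place `part` at a fixed offset from `base`."""
    bx, by, bz = base.position
    ox, oy, oz = offset
    part.position = (bx + ox, by + oy, bz + oz)

def execute(program):
    """Run a program, i.e. a list of ('cuboid', ...) and ('attach', ...)
    commands, and return the resulting named parts."""
    parts = {}
    for cmd, *args in program:
        if cmd == "cuboid":
            name, l, w, h = args
            parts[name] = Cuboid(name, l, w, h)
        elif cmd == "attach":
            part, base, offset = args
            attach(parts[part], parts[base], offset)
    return parts

# A tiny 'table': a top attached above a single leg.
program = [
    ("cuboid", "leg", 0.1, 0.1, 0.5),
    ("cuboid", "top", 1.0, 1.0, 0.1),
    ("attach", "top", "leg", (0.0, 0.0, 0.5)),
]
```

Because a program like this is just a token sequence with explicit structure, a generative model that writes such programs produces shapes that are editable and well-formed by construction.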
GANGogh: Creating Art with GANs
Williams College Independent Study
Blog | GitHub
Trained a Generative Adversarial Network on images from wikiart.org to generate novel 64x64 artworks.