I'm a second-year Ph.D. student in the Future Data Systems group at Stanford University, advised by Peter Bailis. My research focuses on machine learning, with an emphasis on real-world applications. In particular, I'm interested in approximation techniques that accelerate learning and inference, approaches to interpretable representation learning, and methods for weakly supervised learning.

I received an M.S. in Computer Science from Stanford in 2015. During my Master's, I was a member of the Stanford NLP Group. Before that, I learned a bit about physics at Princeton University.


Finding Heavily-Weighted Features in Data Streams
Kai Sheng Tai, Vatsal Sharan, Peter Bailis, and Gregory Valiant
arXiv preprint, 2017

There and Back Again: A General Approach to Learning Sparse Models
Vatsal Sharan, Kai Sheng Tai, Peter Bailis, and Gregory Valiant
arXiv preprint, 2017

Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Kai Sheng Tai, Richard Socher, and Christopher D. Manning
ACL 2015
[code] [slides]

Detecting gravitational waves from highly eccentric compact binaries
Kai Sheng Tai, Sean T. McWilliams, and Frans Pretorius
Physical Review D, 2014


neuralart: An implementation of 'A Neural Algorithm of Artistic Style' by Gatys et al.

torch-ntm: A Neural Turing Machine implementation using Torch