I'm a second-year Ph.D. student in the Future Data Systems group at Stanford University, advised by Peter Bailis. My research focuses on machine learning, with an emphasis on real-world applications. In particular, I'm interested in approximation techniques that accelerate learning and inference, approaches to interpretable representation learning, and methods for weakly supervised learning.

I received an M.S. in Computer Science from Stanford in 2015. During my Master's, I was a member of the Stanford NLP Group. Before that, I learned a bit of physics at Princeton University.


Fast and Accurate Low-Rank Factorization of Compressively-Sensed Data
Vatsal Sharan*, Kai Sheng Tai*, Peter Bailis, and Gregory Valiant
* Equal contribution
arXiv preprint, 2018

Moment-Based Quantile Sketches for Efficient High Cardinality Aggregation Queries
Edward Gan, Jialin Ding, Kai Sheng Tai, Vatsal Sharan, and Peter Bailis
arXiv preprint, 2018

Sketching Linear Classifiers over Data Streams
Kai Sheng Tai, Vatsal Sharan, Peter Bailis, and Gregory Valiant
[code] [extended abstract]

Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Kai Sheng Tai, Richard Socher, and Christopher D. Manning
ACL 2015
[code] [slides]

Detecting gravitational waves from highly eccentric compact binaries
Kai Sheng Tai, Sean T. McWilliams, and Frans Pretorius
Physical Review D, 2014


index-baselines: Comparing learned index structures to classical data structures like cuckoo hashing

neuralart: An implementation of the paper "A Neural Algorithm of Artistic Style" by Gatys et al.

torch-ntm: A Neural Turing Machine implementation using Torch