Sham Kakade is a Washington Research Foundation Data Science Chair, with a joint appointment in the Allen School and the Department of Statistics at the University of Washington. He works on the theoretical foundations of machine learning, focusing on designing (and implementing) statistically and computationally efficient algorithms. Among his contributions with a diverse set of collaborators are: establishing principled approaches in reinforcement learning (including the natural policy gradient, conservative policy iteration, and the PAC-MDP framework); optimal algorithms for stochastic and non-stochastic multi-armed bandit problems (including the linear bandit and the Gaussian process bandit); computationally and statistically efficient spectral algorithms for estimating latent variable models (including mixtures of Gaussians, latent Dirichlet allocation, hidden Markov models, and overlapping communities in social networks); and faster algorithms for large-scale convex and nonconvex optimization. He is the recipient of the IBM Goldberg Best Paper Award (2007) for contributions to fast nearest neighbor search and of the INFORMS Revenue Management and Pricing Section Best Paper Prize (2014). He served as program chair for COLT 2011.
Sham completed his Ph.D. at the Gatsby Computational Neuroscience Unit at University College London under the supervision of Peter Dayan, and he was a postdoc at the University of Pennsylvania under the supervision of Michael Kearns. As an undergraduate at Caltech, he studied physics under the supervision of John Preskill. Sham has been a Principal Researcher at Microsoft Research, New England; an associate professor in the Department of Statistics at the Wharton School, University of Pennsylvania; and an assistant professor at the Toyota Technological Institute at Chicago.
For more information, visit Sham's website.