Adaptive Monte Carlo Computation
The celebrated Monte Carlo method estimates, by random sampling, a quantity that is expensive to compute exactly. In many large-scale computation problems arising in data science and machine learning, one is interested in computing a function of many such quantities. One approach to approximately compute the value of such a function is to first compute Monte Carlo estimates of all the quantities. However, in many such problems, the function value depends sensitively on only a very small but a priori unknown subset of the quantities. We demonstrate a general technique for efficiently solving these problems on three examples: 1) computing the medoid of a large set of points; 2) finding k-nearest neighbors; 3) permutation-based multiple testing. At a high level, the technique converts a computational problem into a statistical estimation problem and accelerates the process by adaptive estimation via multi-armed bandits.
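To make the idea concrete, here is a minimal sketch of the bandit viewpoint applied to the first example, the medoid (the point minimizing total distance to all other points). Each point is treated as an arm; its average distance to the rest is estimated from a few randomly sampled distances, and arms whose confidence interval lies above the best arm's are eliminated. The function name `medoid_ucb`, the sub-Gaussian scale `sigma`, and all parameter choices are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def medoid_ucb(points, sigma=1.0, delta=1e-3, batch=16, rng=None):
    """Sketch: estimate the medoid by adaptively sampling distances,
    eliminating points via UCB-style confidence bounds.

    `sigma` is an assumed sub-Gaussian scale for the sampled distances
    (an illustrative choice, not a tuned constant).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(points)
    pulls = np.zeros(n, dtype=int)   # distance samples drawn per point
    means = np.zeros(n)              # running mean distance per point
    active = np.arange(n)            # points not yet eliminated
    while len(active) > 1 and pulls[active].max() < n:
        for i in active:
            # sample `batch` random reference points and average distances
            # (may occasionally sample i itself; fine for a sketch)
            js = rng.integers(0, n, size=batch)
            d = np.linalg.norm(points[i] - points[js], axis=1).mean()
            means[i] = (means[i] * pulls[i] + d * batch) / (pulls[i] + batch)
            pulls[i] += batch
        # eliminate points whose lower confidence bound exceeds
        # the smallest upper confidence bound among active points
        rad = sigma * np.sqrt(2 * np.log(1 / delta) / pulls[active])
        lcb = means[active] - rad
        ucb = means[active] + rad
        active = active[lcb <= ucb.min()]
    # fall back to exact evaluation among the surviving candidates
    best, best_val = active[0], np.inf
    for i in active:
        val = np.linalg.norm(points[i] - points, axis=1).sum()
        if val < best_val:
            best, best_val = i, val
    return best
```

The key saving is that clearly non-medoid points are discarded after a handful of distance samples, so the expensive exact evaluation is confined to a small surviving set rather than all n points.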
David Tse received the B.A.Sc. degree in systems design engineering from the University of Waterloo in 1989, and the M.S. and Ph.D. degrees in electrical engineering from the Massachusetts Institute of Technology in 1991 and 1994, respectively. From 1995 to 2014, he was on the faculty of the University of California at Berkeley. He is currently the Thomas Kailath and Guanghan Xu Professor at Stanford University. He received the Claude E. Shannon Award in 2017 and was elected a member of the U.S. National Academy of Engineering in 2018. Previously, he received an NSF CAREER award in 1998, the Erlang Prize from the INFORMS Applied Probability Society in 2000, and the Frederick Emmons Terman Award from the American Society for Engineering Education in 2009. He is a coauthor, with Pramod Viswanath, of the text Fundamentals of Wireless Communication, which has been used in over 60 institutions around the world. He has received best paper awards from the IEEE Information Theory and Communications Societies.