Benjamin Lu Davis


Graduated in 2022


  • Lakeside School Seattle, 2007-2013
  • Brown University, 2013-2020
      • Music BA ‘18
      • Engineering BA ‘18
      • Electrical/Computer Engineering ScM ‘19
      • Physics ScM ‘20
  • University of Washington, 2020-2024
      • Electrical/Computer Engineering PhD ‘24
      • Statistics, Concurrent ScM ‘22

I have a wide range of interdisciplinary research interests; here are the ones most pertinent to UW’s Department of Statistics (in no particular order, to give a flavor):

  • Classical Statistical Machine Learning/Data Science
  • Pattern Theory, Information Theory, and Chaos
  • Probability Theory and Measure Theory
  • Monte Carlo Methods, Random Algorithms, and their applications to science and engineering
  • Pseudorandom Number Generation
  • Statistical Signal Processing/Stochastic Processes/Time Series Analysis
  • Stochastic Control and Stochastic Optimal Control
  • Optimization Theory and Methods/Convex Analysis applied to Data Science
  • Wavelet Analysis applied to Data Science
  • Econometrics/Computational Finance Risk Management
  • Probabilistic/Algorithmic Game Theory
  • Statistical Mechanics, Statistical Thermodynamics, and Statistical Physics/Chemistry
  • Statistics and Probability in Quantum Computing
  • Statistical Methods for Audio/Speech/Music Signal Processing (1D Samples)
  • Statistical Methods for Computer Vision and Image Processing (2D Pixels)
  • Statistical Methods for Medical Image Analysis (3D Voxels)
  • Statistical Linguistics/Natural Language Processing
  • Statistical Genetics/Population Genetics
  • Biostatistics (basically hospital record data science)
  • Experimental Design and Analysis: studies in medicine/pharmacy, psychology, and social science
  • Statistical Epistemology: interpretation and/or psychological implications of probability and data
  • Deep Learning** (My philosophy is that the sledgehammer of deep learning should only be used as a last resort, when the scalpel of classical machine learning fails. Just two parameters, the slope and intercept of a best-fit line, can tell you more about the structure of a cloud of data points than millions of opaque neural-network parameters, which may take up more memory than the original data set itself!)
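As a minimal illustration of that last point (a hypothetical example, not from any of my actual projects): fitting a line to a noisy linear data set compresses hundreds of samples down to just two interpretable numbers.

```python
import numpy as np

# Hypothetical data: 200 noisy samples along a line with slope 2.5, intercept 1.0.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = 2.5 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Two fitted parameters summarize the structure of all 200 samples.
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)  # close to the true slope 2.5 and intercept 1.0
```

Those two numbers describe the data's trend directly, whereas a neural network fit to the same points would bury that structure in its weights.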

My own original research projects are catalogued on my website:

Feel free to reach out and shoot me an email.

And feel free to check out my own original statistics-themed comedic webcomic, “The Daily Deeds of Dayton
the Data Scientist”: