I have two main lines of work, and I am recruiting talented PhD students in both areas.
I can take graduate students through UC Davis' graduate programs in either Computer Science or Applied Mathematics. If you are admitted to the UC Davis graduate program in Mathematics, it is (to my knowledge) possible to transfer into Applied Mathematics.
I am interested in revisiting and solving fundamental statistical problems, including ones as basic as mean estimation: given i.i.d. samples from a real-valued distribution, how do we most accurately estimate its mean? The sample mean/average is the ubiquitous estimator, yet it is also well known to be sensitive to extreme values. In fact, the sample mean is provably (highly) sub-optimal in finite-sample regimes.
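To see concretely why the sample mean can struggle, here is a small illustrative Python experiment (my own toy sketch, not the estimator from our paper): it compares the sample mean against the classical median-of-means estimator on a heavy-tailed Pareto distribution, where rare extreme draws give the sample mean a heavy upper tail of errors.

```python
import numpy as np

rng = np.random.default_rng(0)

def median_of_means(samples, num_blocks=10):
    """Classical median-of-means: split the samples into blocks,
    average each block, and return the median of the block averages."""
    blocks = np.array_split(samples, num_blocks)
    return np.median([b.mean() for b in blocks])

# Heavy-tailed data: a Pareto distribution with tail index 1.5
# (finite mean, infinite variance).  True mean = alpha / (alpha - 1) = 3.
alpha, true_mean = 1.5, 3.0
n, trials = 1000, 5000
err_mean, err_mom = [], []
for _ in range(trials):
    x = rng.pareto(alpha, size=n) + 1.0   # classical Pareto supported on [1, inf)
    err_mean.append(abs(x.mean() - true_mean))
    err_mom.append(abs(median_of_means(x) - true_mean))

# Rare extreme draws give the sample mean a heavy upper tail of errors,
# which the median-of-means largely avoids.
print("99th-percentile error, sample mean     :", np.quantile(err_mean, 0.99))
print("99th-percentile error, median-of-means :", np.quantile(err_mom, 0.99))
```

Median-of-means already improves the tail behaviour, but it is not optimal either; the interesting question is what the best achievable guarantee is, down to the constants.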
The question remains, then, what is the best possible estimator?
Paul Valiant and I constructed the first 1-d mean estimator whose error is tight even in the leading constant, for finite-but-unknown variance distributions. The same estimator is furthermore robust to infinite-variance distributions and adversarially-corrupted samples.
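To give a flavour of what "tight even in the leading constant" means (a paraphrase from memory, not the paper's exact theorem statement): from $n$ i.i.d. samples with unknown variance $\sigma^2$, and for failure probability $\delta$, the goal is an estimate $\hat{\mu}$ of the true mean $\mu$ satisfying

$$ |\hat{\mu} - \mu| \;\le\; (1+o(1))\,\sigma\sqrt{\frac{2\ln(1/\delta)}{n}} \quad \text{with probability at least } 1-\delta, $$

which is the error a Gaussian of the same variance would allow, including the constant $\sqrt{2}$; the sample mean can be polynomially worse in $1/\delta$ for heavy-tailed distributions.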
What's next? Right now, even the construction of an optimal 2-d mean estimator is an open problem! In addition to mean estimation, there are plenty of other interesting and challenging problems to work on in this space, and there is a strong connection to the field of robust statistics, where parts of the input data can be adversarially corrupted. I am looking for mathematically-strong students to join me in this line of work.
Many real-life decision-making tasks (e.g. scheduling, trading) can be modelled as optimization problems, but these optimizations may depend on parameters that are only revealed after a solution is needed. For example, to schedule a restaurant's shifts for the coming week, we need to predict customer demand over that week.
The field of Predict+Optimize (or Smart-Predict-then-Optimize), pioneered by the work of Elmachtoub and Grigas, is concerned with training prediction models that make good parameter predictions from relevant features, so that the resulting optimization solution has good objective value even under the true parameters revealed in the future. This requires prediction models that are aware of the downstream optimization problem.
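As a toy illustration of the decision-aware objective (a hypothetical Python/NumPy sketch, not our framework or code; the fractional-knapsack problem and all names here are made up for illustration), the quantity one ultimately cares about is the regret of the decision induced by the predicted parameters, evaluated under the true parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def solve_fractional_knapsack(values, weights, capacity):
    """Greedy (optimal) solver for the fractional knapsack: a stand-in
    for the downstream optimization in a Predict+Optimize pipeline."""
    x, remaining = np.zeros(len(values)), capacity
    for i in np.argsort(-values / weights):     # best value-per-weight first
        take = min(1.0, remaining / weights[i])
        x[i] = take
        remaining -= take * weights[i]
        if remaining <= 0:
            break
    return x

def decision_regret(pred_values, true_values, weights, capacity):
    """How much objective value is lost by optimizing with predicted
    item values, when judged under the true item values."""
    x_pred = solve_fractional_knapsack(pred_values, weights, capacity)
    x_best = solve_fractional_knapsack(true_values, weights, capacity)
    return true_values @ x_best - true_values @ x_pred

# Toy data: true item values are an (unknown) linear function of features.
n_items, n_features = 20, 5
w_true = rng.uniform(0.5, 1.5, size=n_features)
features = rng.normal(size=(n_items, n_features))
true_values = np.maximum(features @ w_true + 5.0, 0.1)   # keep values positive
weights = rng.uniform(1.0, 3.0, size=n_items)
capacity = 10.0

# A predictor trained purely for prediction accuracy may still induce poor
# decisions; Predict+Optimize trains the predictor against the regret instead.
w_noisy = w_true + rng.normal(scale=0.5, size=n_features)
pred_values = np.maximum(features @ w_noisy + 5.0, 0.1)
print("decision regret of the noisy predictor:",
      decision_regret(pred_values, true_values, weights, capacity))
```

A standard regression loss ignores this downstream structure, and the regret above is non-differentiable in the predictions, which is exactly why Predict+Optimize methods design surrogate losses (e.g. the SPO+ loss of Elmachtoub and Grigas) or differentiable solver layers.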
My team was the first to propose an extension to the framework, where even the optimization constraints can be uncertain — this is highly non-trivial, because if we mis-predict the constraint set, we can't even guarantee feasibility under the true parameters! My current focus is to vastly expand the space of settings where we can intelligently do Predict+Optimize.
This line of work is mostly empirical rather than theoretical (so far). Desirable background includes basic multivariable calculus, (discrete) optimization/mathematical programming, and strong programming skills.
If you're interested in joining my group, please feel free to reach out via email. Do prepend the string "[Maillard Reaction]" to your email subject, so I know you read through this page.