Denny Hall

We study the problem of distribution-free dependence detection and modeling through the new framework of binary expansion statistics (BEStat). The binary expansion testing (BET) avoids the problem of non-uniform consistency and improves upon a wide class of commonly used methods (a) by achieving the minimax rate in sample size requirement for reliable power and (b) by providing clear interpretations of global relationships upon rejection of independence.
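
As a brief sketch of the expansion behind the name (notation here is illustrative rather than the speaker's): after transforming each margin to Uniform$[0,1]$, one writes $U = \sum_{k \ge 1} A_k 2^{-k}$, where the bits $A_1, A_2, \ldots$ are i.i.d. Bernoulli$(1/2)$; the BET at depth $d$ then tests independence through the cross interactions of the first $d$ bits of each margin.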

Recently, multi-modal single-cell data has been growing in popularity in many areas of biomedical research and provides new opportunities to learn how different modalities coordinate within each cell. Many existing dimension reduction methods for such data estimate a low-dimensional embedding that captures all the axes of variation from either modality. While these methods are useful, in this talk we develop Tilted-CCA to perform a fundamentally different task.

There is growing appreciation of the perils of naively using the same data for model selection and subsequent inference; such “double-dipping” is now frowned upon in many disciplines. Sample splitting has become the de facto solution, but it reflects only one possible solution to the challenge of choosing data-driven hypotheses for subsequent inferential investigation. Indeed, there are some cases, e.g., with dependent data or when using unsupervised methods like clustering, where it is not clear how to appropriately conduct sample splitting.
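
A minimal sketch of the sample-splitting recipe in the simplest i.i.d. setting (variable names and the selection rule below are hypothetical, chosen only for illustration):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))          # candidate features
y = X[:, 3] + rng.normal(size=n)     # outcome; feature 3 truly matters

# Split once: one half for hypothesis generation, the other for testing.
idx = rng.permutation(n)
select, test = idx[: n // 2], idx[n // 2 :]

# "Double dipping" would pick and test the feature on the same data.
# Instead, pick the most correlated feature on the selection half ...
corrs = [abs(stats.pearsonr(X[select, j], y[select])[0]) for j in range(p)]
j_star = int(np.argmax(corrs))

# ... and test that single, now pre-specified hypothesis on the held-out half.
r, pval = stats.pearsonr(X[test, j_star], y[test])
print(f"selected feature {j_star}, held-out correlation {r:.2f}, p-value {pval:.3g}")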

In the era of data explosion, psychometricians and statisticians have been developing interpretable and computationally efficient statistical methods to measure latent factors (e.g. skills, abilities, and personalities) using large-scale assessment data.

By modeling documents as mixtures of topics, topic modeling allows the discovery of latent thematic structures within large text corpora and has played an important role in natural language processing over the past decades. Beyond text data, topic modeling has proven central to the analysis of microbiome data, population genetics, and, more recently, single-cell spatial transcriptomics.
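
As a minimal sketch of fitting such a model, here using scikit-learn's LDA implementation (the tiny corpus and settings are made up purely for illustration):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "gene expression in tumor cells",
    "soccer match ends in penalty shootout",
    "immune cells respond to infection",
    "league title decided on final match day",
]

# Documents become word-count vectors; LDA models each as a mixture of topics.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Top words per topic, and per-document topic proportions (rows sum to 1).
vocab = vectorizer.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [vocab[i] for i in comp.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
print(lda.transform(X))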

From clinical trials and public health to development economics and political science, randomized experiments stand out as one of the most reliable methodological tools, as they require the fewest assumptions to estimate causal effects. Adaptive experiment designs – where experimental subjects arrive sequentially and the probability of treatment assignment can depend on previously observed outcomes – are becoming an increasingly popular method for causal inference, as they offer the possibility of improved precision over their non-adaptive counterparts.
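
A toy sketch of one common adaptive assignment rule, Thompson sampling with binary outcomes (the specific rule and parameters are illustrative, not necessarily those considered in the talk):

import numpy as np

rng = np.random.default_rng(1)
p_control, p_treat = 0.30, 0.45        # hypothetical success probabilities
wins, losses = np.ones(2), np.ones(2)  # Beta(1, 1) priors for the two arms

assignments = []
for t in range(500):                   # subjects arrive sequentially
    # Assignment depends on the outcomes observed so far.
    draws = rng.beta(wins, losses)
    arm = int(np.argmax(draws))        # 0 = control, 1 = treatment
    y = rng.random() < (p_treat if arm else p_control)
    wins[arm] += y
    losses[arm] += 1 - y
    assignments.append(arm)

print("share assigned to treatment:", np.mean(assignments))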

The climate change projections of the Intergovernmental Panel on Climate Change are based on scenarios for future emissions, but these are not statistically based and do not have a full probabilistic interpretation. Instead, Raftery et al. (2017) and Liu and Raftery (2021) developed probabilistic forecasts for global average temperature change to 2100.

Graphical continuous Lyapunov models offer a new perspective on modeling causally interpretable dependence structure in multivariate data by treating each independent observation as a one-time cross-sectional snapshot of a temporal process.  Specifically, the models consider multivariate Ornstein-Uhlenbeck processes in equilibrium.  This leads to Gaussian models in which the covariance matrix is determined by the continuous Lyapunov equation.
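
Concretely, if the process follows Ornstein-Uhlenbeck dynamics $dX_t = M(X_t - a)\,dt + D\,dW_t$, its equilibrium covariance $\Sigma$ solves the continuous Lyapunov equation $M\Sigma + \Sigma M^{\top} + C = 0$ with $C = DD^{\top}$ (standard notation for these models; the parametrization used in the talk may differ).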

Causal intervention is an essential tool in causal inference.

Simple mathematical models of COVID-19 transmission gained prominence in the early days of the pandemic. These models provided researchers and policymakers with qualitative insight into the dynamics of transmission and quantitative predictions of disease incidence.  More sophisticated models incorporated new information about the natural history of COVID-19 disease and the interaction of infected individuals with the healthcare system, to predict diagnosed cases, hospitalization, ventilator usage, and death.
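
For context, the kind of simple compartmental model referred to here can be written down in a few lines; the sketch below is a generic SIR model with made-up parameters, not the speaker's model:

import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    # S: susceptible, I: infected, R: recovered (fractions of the population)
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

beta, gamma = 0.3, 0.1               # hypothetical transmission and recovery rates
sol = solve_ivp(sir, (0, 200), [0.99, 0.01, 0.0],
                args=(beta, gamma), dense_output=True)

t = np.linspace(0, 200, 5)
print(np.round(sol.sol(t)[1], 3))    # infected fraction at a few time points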

The use of Bayesian methods in the social sciences has increased rapidly over the past decade, including in the field of demography, where Bayesian methods are used to produce estimates and forecasts of demographic and health indicators across a wide range of populations. In this talk, I will briefly describe the history of use of Bayesian methods in demography, and highlight the strengths of such methods in the context of forecasting, small area estimation, and using non-representative data.

Shape restrictions such as monotonicity in one or more dimensions sometimes naturally arise. The restriction can be effectively used for function estimation without smoothing. Several exciting results on function estimation under monotonicity, and to a lesser extent, under multivariate monotonicity have been obtained in the frequentist setting. But only a little is known about how Bayesian methods work when there are restrictions on the shape. Chakraborty and Ghosal recently studied the convergence properties of a "projection-posterior" distribution.
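
A minimal illustration of the projection step that gives the projection-posterior its name: an unconstrained estimate (here a single noisy curve rather than a posterior draw) is mapped to the monotone class by isotonic regression, its L2 projection onto nondecreasing functions.

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 100)
y = np.sqrt(x) + rng.normal(scale=0.2, size=x.size)   # truth is increasing

# Project the unconstrained values onto the set of nondecreasing functions.
monotone_fit = IsotonicRegression(increasing=True).fit_transform(x, y)

print(bool(np.all(np.diff(monotone_fit) >= 0)))       # True: projection is monotone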

Random field theory (RFT) has been used for signal detection in the “massively univariate” linear models of neuroimaging. Such analyses preclude building multivariate models of activity, comparing

Coauthors: Michael Jansson and Kenichi Nagasawa

 

The Hawkes process is a popular type of self-exciting point process that has found application in the modeling of financial stock markets, earthquakes, and social media cascades. Its continuous-time framework, however, requires that the event data collected for inference be accurate. For real-time monitors of data, for example in remote sensing or cybersecurity, accurate detection of events is challenging.
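
For reference, a univariate Hawkes process with an exponential kernel has conditional intensity $\lambda(t) = \mu + \sum_{t_i < t} \alpha e^{-\beta (t - t_i)}$, so each past event $t_i$ temporarily raises the rate of future events; the kernel choice here is only illustrative.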

Recently, addressing “spatial confounding” has become a major topic in spatial statistics. However, the literature has provided conflicting definitions, and many proposed definitions do not address the issue of confounding as it is understood in causal inference.

The US Census Bureau will deliberately corrupt data sets derived from the 2020 US Census in an effort to maintain privacy, suggesting a painful trade-off between the privacy of respondents and the precision of economic analysis. To investigate whether this trade-off is inevitable, we formulate a semiparametric model of causal inference with high dimensional corrupted data. We propose a procedure for data cleaning, estimation, and inference with data cleaning-adjusted confidence intervals.

Randomized experiments allow for consistent estimation of the average treatment effect based on the difference in mean outcomes without strong modeling assumptions. Appropriate use of pretreatment covariates can further improve the estimation efficiency. Missingness in covariates is nevertheless common in practice and raises an important question: should we adjust for covariates subject to missingness, and if so, how? The unadjusted difference in means is always unbiased.
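
A toy sketch of the two estimators being contrasted, on simulated data with a fully observed covariate (variable names are made up; the missing-covariate versions discussed in the talk build on these):

import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)                         # pretreatment covariate
z = rng.binomial(1, 0.5, size=n)               # randomized treatment indicator
y = 1.0 * z + 2.0 * x + rng.normal(size=n)     # outcome; true effect = 1

# Unadjusted difference in means: unbiased under randomization.
diff_in_means = y[z == 1].mean() - y[z == 0].mean()

# Covariate adjustment (regression with a treatment-by-centered-covariate
# interaction): typically more efficient when the covariate is prognostic.
xc = x - x.mean()
design = np.column_stack([np.ones(n), z, xc, z * xc])
beta = np.linalg.lstsq(design, y, rcond=None)[0]

print(f"difference in means: {diff_in_means:.3f}, adjusted estimate: {beta[1]:.3f}")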

Quantifying treatment effect heterogeneity is a crucial task in many areas of causal inference, e.g. optimal treatment allocation and estimation of subgroup effects. We study the problem of estimating the level sets of the conditional average treatment effect (CATE), identified under the no-unmeasured-confounders assumption. Given a user-specified threshold, the goal is to estimate the set of all units for whom the treatment effect exceeds that threshold.
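
In the usual potential-outcomes notation, the estimand is $\tau(x) = E[Y(1) - Y(0) \mid X = x]$, and for a threshold $c$ the target level set is $\{x : \tau(x) > c\}$, the covariate profiles whose treatment effect exceeds $c$.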

Emerging modern datasets in public health call for the development of innovative statistical methods that can leverage complex real-world data settings. We first discuss a stochastic epidemic model that incorporates contact tracing data to make inference about transmission dynamics on an adaptive contact network. An efficient data-augmented inference scheme is designed to accommodate partially observed epidemic data.

Reinforcement learning is a general technique that allows an agent to learn an optimal policy by interacting with an environment in sequential decision-making problems. The goodness of a policy is measured by its value function starting from some initial state. This talk includes a few topics on constructing statistical inference for a policy's value in infinite-horizon settings where the number of decision points diverges to infinity. Applications to real-world examples will also be discussed.
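
In standard notation, the discounted infinite-horizon value of a policy $\pi$ from initial state $s$ is $V^{\pi}(s) = E_{\pi}\big[\sum_{t=0}^{\infty} \gamma^{t} R_t \mid S_0 = s\big]$ with $0 \le \gamma < 1$; the inference discussed in the talk concerns quantities of this form.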

We introduce the localization schemes framework for analyzing the mixing time of Markov chains. Our framework unifies and extends the previous proof techniques via spectral independence framework by Anari, Liu and Oveis Gharan and the stochastic localization process used for proving high dimensional properties of log-concave measures.

Randomized control trials (RCTs) have been the gold standard to evaluate the effectiveness of a program, policy, or treatment on an outcome of interest. However, many RCTs assume that study participants are willing to share their (potentially sensitive) data, specifically their response to treatment. This assumption, while seemingly innocuous at first, is becoming difficult to satisfy in the modern era, especially in online settings where there are more regulations to protect individuals' data.

The method of difference-in-differences (DID) is widely used to study the causal effect of policy interventions in observational studies. DID employs a before and after comparison of the treated and control units to remove bias due to time-invariant unmeasured confounders under the parallel trends assumption. Estimates from DID, however, will be biased if the outcomes for the treated and control units evolve differently in the absence of treatment, namely if the parallel trends assumption is violated.
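
In its simplest two-group, two-period form, the estimator is $\hat{\tau}_{\mathrm{DID}} = (\bar{Y}_{\mathrm{treated,\,post}} - \bar{Y}_{\mathrm{treated,\,pre}}) - (\bar{Y}_{\mathrm{control,\,post}} - \bar{Y}_{\mathrm{control,\,pre}})$, which identifies the effect on the treated precisely when, absent treatment, the two groups' average outcomes would have followed the same trend.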

With observational data alone, causal inference is a challenging problem. The task becomes easier when one has access to data collected from perturbations of the underlying system, even when the nature of these perturbations is unknown. In this talk, we will describe methods that use such perturbation data to identify plausible causal mechanisms and to obtain robust predictions. Specifically, in the context of Gaussian linear structural equation models, we first characterize the interventional equivalence class of DAGs.

In contemporary applications, it is common to collect very large data sets with the vaguely-defined goal of hypothesis generation. Once a dataset is used to generate a hypothesis,  we might wish to test that hypothesis on the same set of data. However, this type of "double dipping" violates a cardinal rule of statistical hypothesis testing: namely, that we must decide what hypothesis to test before looking at the data.

Modern machine learning algorithms have achieved remarkable performance in a myriad of applications, and are increasingly used to make impactful decisions in the hiring process, criminal sentencing, healthcare diagnostics and even to make new scientific discoveries. The use of data-driven algorithms in high-stakes applications is exciting yet alarming: these methods are extremely complex, often brittle, notoriously hard to analyze and interpret.

Scientific research is often concerned with questions of cause and effect. For example, does eating processed meat cause certain types of cancer? Ideally, such questions are answered by randomized controlled experiments. However, these experiments can be costly, time-consuming, unethical or impossible to conduct. Hence, often the only available data to answer causal questions is observational.  

Change point detection is a popular tool for identifying locations in a data sequence where an abrupt change occurs in the data distribution, and it has been widely studied for Euclidean data. Modern data are very often non-Euclidean, for example distribution-valued data or network data. Change point detection is a challenging problem when the underlying data space is a metric space in which one does not have basic algebraic operations like addition of data points and scalar multiplication.

Advisor: Jon Wellner

We consider the problem of forming confidence intervals and tests for the location of the mode in the setting of nonparametric estimation of a log-concave density. We thus study the class of log-concave densities with fixed and known mode. We find the maximum likelihood estimator for this class, give a characterization of it, and, under the null hypothesis, show our estimator is uniformly consistent and is $n^{2/5}$-tight at the mode. We also show uniqueness of the analogous limiting "estimator" of a quadratic function with white noise.
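
For concreteness, a density on the real line belongs to this class when it can be written as $f(x) = e^{\varphi(x)}$ with $\varphi$ concave and $\varphi$ attaining its maximum at the fixed, known mode $m$ (a standard restatement; the notation is not necessarily the speaker's).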

Advisor: Adrian E. Raftery