Preference learning is the task of aggregating individual preferences, such as rankings or ratings, in order to learn the overall preferences of a population. In most settings, preference aggregation is performed deterministically and fails to capture any uncertainty in the aggregate result. Furthermore, there are no existing statistical models for scenarios in which rankings and ratings arise simultaneously, even though such data occur in a variety of real-world settings.

In the first project, we propose the first unified statistical model for rankings and ratings. We develop an efficient tree-search algorithm for frequentist estimation, demonstrate how model outputs can be used to understand group preferences and rank objects with confidence, and apply our model to real grant panel review data in which both rankings and ratings were collected.

In the second project, we extend the proposed model to account for heterogeneous preferences and additional types of rankings, such as pairwise comparisons. Additionally, we propose methods for Bayesian estimation to improve computational efficiency and allow for the incorporation of prior information, which is helpful when preference data are limited. Both projects demonstrate how to combine rankings and ratings to identify consensus and quantify the associated statistical uncertainty.