Algorithmic bias is often of greatest concern in contexts shaped by a long history of discrimination, marginalization, and procedural injustice. Taking the US child welfare system as the primary case study, I will discuss what we have learned about the development, deployment, evaluation, and impact of predictive risk assessment algorithms in inequitable systems. I will describe the role that "non-universal" data collection and problem formulation, specifically the choice of prediction target, play as potential drivers of disparities in the resulting predictive models. Along the way, I will highlight demands for procedural, informational, distributive, and interpersonal justice that have emerged from qualitative studies of affected communities and diverse stakeholders.
Details on the Blackwell Seminar can be found here.