Fair classification refers to the problem of learning a classifier subject to a set of fairness guarantees. In this talk, I will explain the unique challenges that arise in fair classification when training labels are potentially corrupted. The talk will start with an easier case in which the corruption error rates depend on the label class but are homogeneous across the protected groups (e.g., the groups of young and senior people). Here I will introduce a new robust loss function, which I call peer loss, that helps mitigate the effects of noisy labels without requiring knowledge of the error rates. Then I’ll show how heterogeneous label noise (varying across protected groups) leads to more systematic biases and detrimental effects. Our solution is based on performing empirical risk minimization with carefully designed loss functions and surrogate constraints that help avoid the pitfalls introduced by heterogeneous label noise.
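As a rough illustration of the peer-loss idea mentioned above, the sketch below (my own minimal NumPy rendering, not the speaker's implementation; the logistic base loss and the `alpha` weight are assumptions for illustration) evaluates the usual per-sample loss and subtracts the loss on a randomly mismatched score–label pair, which is what removes the need to know the label-noise rates:

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_loss(scores, labels):
    # Standard logistic loss; labels are in {-1, +1}.
    return np.log1p(np.exp(-labels * scores))

def peer_loss(scores, labels, alpha=1.0):
    """Peer-loss sketch: for each sample, subtract the base loss
    evaluated on a score and a label drawn from two independently
    sampled "peer" examples. `alpha` (assumed here) weights the
    correction term."""
    n = len(scores)
    i1 = rng.integers(0, n, size=n)  # random peer index for the score
    i2 = rng.integers(0, n, size=n)  # independent random peer index for the label
    return logistic_loss(scores, labels) - alpha * logistic_loss(scores[i1], labels[i2])

# Example usage on toy classifier scores and (possibly noisy) labels:
scores = np.array([2.0, -1.5, 0.3, -0.7])
labels = np.array([1, -1, -1, 1])
losses = peer_loss(scores, labels)
```

The second term is what distinguishes peer loss from ordinary empirical risk: penalizing agreement with randomly paired labels cancels, in expectation, the bias that uniform label noise adds to the first term.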