Conformalized Fairness via Quantile Regression (NeurIPS 2022)

Algorithmic fairness has received increased attention in socially sensitive domains. While a rich literature on mean fairness has been established, research on quantile fairness remains sparse but vital. To meet this need and underscore the significance of quantile fairness, we propose a novel framework to learn a real-valued quantile function under the fairness requirement of Demographic Parity with respect to sensitive attributes, such as race or gender, and thereby derive a reliable fair prediction interval. Using optimal transport and functional synchronization techniques, we establish theoretical guarantees of distribution-free coverage and exact fairness for the prediction interval constructed from the fair quantiles. A hands-on pipeline is provided that combines flexible quantile regression methods with an efficient fairness-adjustment post-processing algorithm. We demonstrate the superior empirical performance of this approach on several benchmark datasets. Our results show the model’s ability to uncover the mechanism underlying the fairness-accuracy trade-off in a wide range of societal and medical applications.

GitHub Link

Paper Link
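
The pipeline in the abstract lends itself to a short sketch: fit group-wise quantile regressors, replace each group's quantile prediction with the across-group average (for one-dimensional outputs, the Wasserstein barycenter of quantile functions is their pointwise mean, which removes the dependence on the group label), then apply a split-conformal correction for distribution-free coverage. The code below is a simplified illustration under these assumptions, not the authors' released implementation; helper names such as `fair_quantile` and `conformal_margin` are hypothetical.

```python
# Illustrative sketch: group-wise quantile fits -> Demographic-Parity
# averaging -> split-conformal calibration. Not the authors' code;
# function names here are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_group_quantiles(X, y, groups, tau):
    """Fit one pinball-loss quantile regressor per sensitive group."""
    return {
        g: GradientBoostingRegressor(loss="quantile", alpha=tau)
           .fit(X[groups == g], y[groups == g])
        for g in np.unique(groups)
    }

def fair_quantile(models, X):
    """For 1-D outputs, the Wasserstein barycenter of the group quantile
    functions is their pointwise average; predicting this average for every
    individual removes explicit dependence on the group label (a simplified
    reading of the Demographic Parity adjustment, with equal group weights)."""
    return np.mean([m.predict(X) for m in models.values()], axis=0)

def conformal_margin(lo_cal, hi_cal, y_cal, miscoverage=0.1):
    """Split-conformal (CQR-style) margin from a held-out calibration set,
    giving at least 1 - miscoverage marginal coverage."""
    scores = np.maximum(lo_cal - y_cal, y_cal - hi_cal)
    k = int(np.ceil((len(y_cal) + 1) * (1 - miscoverage)))
    return np.sort(scores)[min(k, len(scores)) - 1]

# Usage: a fair 90% prediction interval from the 5% and 95% quantiles,
# given train (X_tr, y_tr, g_tr), calibration (X_cal, y_cal), test X_te.
# lo_models = fit_group_quantiles(X_tr, y_tr, g_tr, tau=0.05)
# hi_models = fit_group_quantiles(X_tr, y_tr, g_tr, tau=0.95)
# q = conformal_margin(fair_quantile(lo_models, X_cal),
#                      fair_quantile(hi_models, X_cal), y_cal)
# interval = (fair_quantile(lo_models, X_te) - q,
#             fair_quantile(hi_models, X_te) + q)
```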

Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic Information Preserving (AAAI 2022)

With widening deployments of natural language processing (NLP) in daily life, inherited social biases from NLP models...

Balancing gender bias in job advertisements with text-level bias mitigation (Frontiers in Big Data 2022)

Despite progress towards gender equality in the labor market over the past few decades, gender segregation in labor f...

Debiasing with Sufficient Projection: A General Theoretical Framework for Vector Representations (NAACL 2024)

Pre-trained vector representations in natural language processing often inadvertently encode undesirable social biase...