Responsible AI for labor market equality

BIAS is an interdisciplinary project that seeks to understand and tackle the role of AI algorithms in shaping ethnic and gender inequalities in an increasingly digitized labor market.

BIAS

The project seeks to understand and minimize gender and ethnic biases in the AI-driven labor market processes of job advertising, hiring, and professional networking. We further aim to develop ‘responsible’ AI that mitigates these biases and their attendant inequalities by designing bias-sensitive algorithms and development protocols. The empirical context of our investigation spans these labor market processes in organizations and on digital job platforms.

Our project comprises two interlinked work packages. The first investigates the different dimensions of bias from a multi-stakeholder perspective (e.g. employer, employee, digital platform developer), combining in-depth data mining with qualitative investigation of how AI algorithms are used in job advertising, hiring, and professional networking. The second tests and designs new AI algorithms to mitigate these biases and creates protocols for their development and implementation.

Overview

Potential ‘biases’ produced by AI technologies may significantly undermine labor market equality and stymie equitable and sustainable socio-economic development. BIAS’s objectives speak directly to multiple national priority agendas in both the UK and Canada: the gender pay gap, ethnic/racial disparities, and digital and industrial strategy.

As both the UK and Canada embrace digital transformation as part of their national economic and industrial strategies, our focus on the implications of such transformation for labor market equality, together with our objective of reducing inequalities through the responsible development and deployment of AI, promises a broad range of impacts pertinent to the future of labor relations, economic competitiveness, human resource management, and industrial strategy.

Funding sources

Our work was supported by the Economic and Social Research Council (ESRC ES/T012382/1) and the Social Sciences and Humanities Research Council (SSHRC 2003-2019-0003) under the Canada-UK Artificial Intelligence Initiative. The full project title is ‘BIAS: Responsible AI for Gender and Ethnic Labour Market Equality’.

Publications

Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic Information Preserving (AAAI 2022)

With widening deployments of natural language processing (NLP) in daily life, inherited social biases...
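
This paper approaches embedding debiasing through causal inference. As a minimal sketch of one ingredient in that family of methods (not the paper's exact algorithm), the snippet below removes from each embedding the component that is linearly predictable from a small set of gender-definition word vectors; the dimensions and random vectors are illustrative placeholders.

```python
import numpy as np

def proxy_regression_debias(E: np.ndarray, G: np.ndarray) -> np.ndarray:
    """Remove the component of each row of E that is linearly
    predictable from the gender-definition embeddings G.

    E: (n, d) embeddings to debias.
    G: (k, d) embeddings of gender-definition words (e.g. "he", "she"),
       treated as observed proxies for the gender signal.
    """
    # Least-squares weights expressing each row of E as a
    # combination of the rows of G, via the pseudo-inverse.
    W = E @ np.linalg.pinv(G)
    return E - W @ G

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
E = rng.normal(size=(100, 50))   # embeddings to debias
G = rng.normal(size=(2, 50))     # gender-definition word vectors
E_clean = proxy_regression_debias(E, G)
# The cleaned vectors carry no component predictable from G.
print(np.allclose(E_clean @ np.linalg.pinv(G), 0))  # True
```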

Balancing Gender Bias in Job Advertisements with Text-Level Bias Mitigation (Frontiers in Big Data 2022)

Despite progress towards gender equality in the labor market over the past few decades, gender segregation...
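
As a toy illustration of what a text-level gender signal in a job advertisement can look like, the sketch below counts masculine- and feminine-coded words. The word lists are illustrative placeholders (real audits use validated lexicons such as Gaucher et al., 2011), and this is not the paper's mitigation method.

```python
# Illustrative word lists only; not the lexicon used in the paper.
MASCULINE = {"ambitious", "assertive", "competitive", "dominant", "driven"}
FEMININE = {"collaborative", "supportive", "nurturing", "interpersonal", "committed"}

def gendered_word_balance(ad_text: str) -> dict:
    """Count masculine- and feminine-coded words in a job advertisement."""
    tokens = [t.strip(".,;:!?()").lower() for t in ad_text.split()]
    m = sum(t in MASCULINE for t in tokens)
    f = sum(t in FEMININE for t in tokens)
    return {"masculine": m, "feminine": f, "balance": m - f}

ad = "We seek a driven, competitive engineer who is also collaborative."
print(gendered_word_balance(ad))  # {'masculine': 2, 'feminine': 1, 'balance': 1}
```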

Conformalized Fairness via Quantile Regression (NeurIPS 2022)

Algorithmic fairness has received increased attention in socially sensitive domains. While rich literature...
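
This work builds on conformalized quantile regression (CQR). The sketch below is plain CQR on synthetic data (Romano et al., 2019), without the fairness-oriented extensions the paper develops: fit lower and upper quantile models, calibrate them with conformity scores on a held-out split, and widen the interval by the calibrated margin.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic data, split into train / calibration / test.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(3000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=3000)
X_tr, y_tr = X[:1000], y[:1000]
X_cal, y_cal = X[1000:2000], y[1000:2000]
X_te, y_te = X[2000:], y[2000:]

alpha = 0.1  # target 90% coverage

# Fit lower and upper conditional quantile models on the training split.
lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_tr, y_tr)
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_tr, y_tr)

# Conformity scores on the calibration split (Romano et al., 2019).
scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
q = np.quantile(scores, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))

# Calibrated intervals with a finite-sample coverage guarantee.
lower = lo.predict(X_te) - q
upper = hi.predict(X_te) + q
print(f"empirical coverage: {np.mean((y_te >= lower) & (y_te <= upper)):.3f}")
```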

Debiasing with Sufficient Projection: A General Theoretical Framework for Vector Representations (NAACL 2024)

Pre-trained vector representations in natural language processing often inadvertently encode undesirable...
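
The underlying idea can be illustrated with the simplest member of the projection family: removing each vector's component along a single bias direction, e.g. the normalized difference between the vectors for "he" and "she". This is a minimal sketch of classic hard-debiasing-style projection, not the general framework developed in the paper.

```python
import numpy as np

def debias_by_projection(embeddings: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Remove the component of each embedding along a bias direction.

    embeddings: (n, d) matrix of word vectors.
    bias_direction: (d,) vector, e.g. the difference between the
    vectors for "he" and "she"; normalized internally.
    """
    b = bias_direction / np.linalg.norm(bias_direction)
    # Subtract each vector's projection onto the bias direction.
    return embeddings - np.outer(embeddings @ b, b)

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 50))
b = rng.normal(size=50)
E_debiased = debias_by_projection(E, b)
print(np.allclose(E_debiased @ (b / np.linalg.norm(b)), 0))  # True
```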

Partners

University of Alberta

Lancaster University

University of Essex