The project seeks to understand and minimize gender and ethnic biases in the AI-driven labor market processes of job advertising, hiring, and professional networking. We further aim to develop ‘responsible’ AI that mitigates biases and attendant inequalities by designing AI algorithms and development protocols that are sensitive to such biases. The empirical context of our investigation includes these labor market processes in organizations and on digital job platforms.
Our project comprises two interlinked work packages. The first seeks to understand the different dimensions of bias from a multi-stakeholder perspective (e.g. employer, employee, digital platform developer), through in-depth data mining and qualitative investigation of how AI algorithms are used in the labor market processes of job advertising, hiring, and professional networking. The second tests and designs new AI algorithms to mitigate these biases and creates protocols for their development and implementation.
Potential ‘biases’ produced by AI technologies may significantly undermine labor market equality and stymie equitable and sustainable socio-economic development. BIAS’s objectives speak directly to multiple national priority agendas in both the UK and Canada: the gender pay gap, ethnic/racial disparity, and digital and industrial strategy.
As both the UK and Canada look to embrace digital transformation as part of their national (economic and industrial) strategies, our focus on the implications of such transformation for labor market equality, and our objective to reduce inequalities through the responsible development and deployment of AI, promise a broad range of impacts. These are pertinent to the future of labor relations, economic competitiveness, human resource management, and industrial strategy.
Our work was supported by the Economic and Social Research Council (ESRC ES/T012382/1) and the Social Sciences and Humanities Research Council (SSHRC 2003-2019-0003) under the Canada-UK Artificial Intelligence Initiative. The project title is ‘BIAS: Responsible AI for Gender and Ethnic Labour Market Equality’.
With widening deployments of natural language processing (NLP) in daily life, inherited social bi...
Despite progress towards gender equality in the labor market over the past few decades, gender se...
Algorithmic fairness has received increased attention in socially sensitive domains. While rich l...
Pre-trained vector representations in natural language processing often inadvertently encode unde...
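The last excerpt concerns undesirable associations encoded in pre-trained vector representations. As a purely illustrative sketch (not the project's own code or method), one common way to probe such associations is to score occupation words against a gender direction in the embedding space using cosine similarity; the embedding lookup `emb` and the word list below are hypothetical placeholders, and real audits use larger definitional word sets.

```python
# Illustrative only: probing gender association in word embeddings,
# assuming `emb` is a dict-like mapping from words to NumPy vectors.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_direction(emb):
    # A simple "he" minus "she" direction; larger definitional sets are typical.
    return emb["he"] - emb["she"]

def occupation_bias(emb, occupations):
    # Positive scores lean toward "he", negative toward "she".
    g = gender_direction(emb)
    return {w: cosine(emb[w], g) for w in occupations if w in emb}

# Hypothetical usage with pre-trained vectors loaded elsewhere:
# emb = load_pretrained_vectors(...)  # e.g. word2vec or GloVe lookups
# print(occupation_bias(emb, ["nurse", "engineer", "teacher", "plumber"]))
```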


