Artificial Intelligence (AI) is developing fast – but how should it be used? Over the past month, the ethical use and development of AI have been heavily scrutinised as policy makers, intellectuals, and industry leaders debate whether a moratorium should be imposed on AI's development or whether AI should be embraced in the name of economic growth.
At its best, AI offers greater processing power, insights into data, and in some instances, a perspective that does away with certain human biases.
Yet the reality is that because AI is trained on decisions previously made by humans – decisions that themselves embody prejudice – AI tools are biased. This means that companies and other entities seeking to use AI to achieve meaningful, non-discriminatory outcomes need to approach doing so critically.
Enter: the movement for ethical AI.
Advocates for ethical AI, such as Australian research organisation the Gradient Institute, argue that “with AI systems for automated decision-making proliferating rapidly […] it is now important to explore how to ensure such systems do not perpetuate systemic inequality or lead to significant harm to individuals, communities or society”.
Governments and other regulators are lagging in their understanding of AI and in their frameworks for governing it. Companies therefore need to step up.
Recruitment is a good example of ethical AI’s importance in business contexts. Classic recruitment methods can be biased – and, consequently, companies consider automating recruitment processes in a bid to reduce human bias. The problem is that because AI recruitment systems are trained on data about existing employees, they incorporate many of the biases established through human-led hiring practices.
In practice, this means that if an AI-powered recruitment tool is fed data that incorporates biases, it will recreate them. For example, a company might employ AI-powered software to read applicants’ CVs and cover letters. To do this, it first trains the AI tool on information about the company and the people who already work there. Then, based on this information, it asks the tool to read CVs and cover letters and decide which candidates might be best suited to the role and the company. However, if the tool is judging candidates on how similar they are to existing employees – who were originally chosen by humans with a bias towards candidates from a particular university, for example – it may deem people with that educational background better suited to the role. The tool can absorb such a pattern by identifying demographic commonalities among existing employees, even if the company fed it employee data with the aim of identifying commonalities in skills rather than demographics. In short, AI recruitment tools risk detecting existing, flawed patterns in hiring practices and replicating them, because they lack the human judgement needed to recognise that these are patterns a company wishes to leave behind.
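To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn, with invented feature names and data): a screening model trained on past, human-made hiring decisions that happened to favour one university ends up scoring an otherwise identical candidate from elsewhere lower.

```python
# Hypothetical sketch (invented data and feature names): a model trained on
# past hiring decisions that skewed towards one university learns to reward
# that background, even though the stated goal was to learn from skills alone.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each past applicant: [python_skill, communication_skill, attended_university_x]
X_train = np.array([
    [1, 1, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1],  # applicants from University X
    [1, 1, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],  # applicants from elsewhere
])
# 1 = hired by the earlier, human-led (and biased) process; 0 = not hired
y_train = np.array([1, 1, 1, 1, 0, 1, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# Two new candidates with identical skills, differing only in university
candidates = np.array([
    [1, 1, 1],  # attended University X
    [1, 1, 0],  # did not
])
print(model.predict_proba(candidates)[:, 1])
# The first candidate receives a higher score purely because of the
# university proxy absorbed from the biased training labels.
```

The model was never told to care about the university; it simply learned that the feature predicted past hires, which is precisely the kind of pattern a company may wish to leave behind.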
According to Edward Santow, Director of Policy and Governance at the Human Technology Institute, the challenge facing companies is how to integrate machine-led and human-led decision-making in recruitment. Algorithms excel at making quantitative decisions; humans tend to do better with discretionary ones. Socio-technical systems in recruitment may therefore be the best solution moving forward, tempering both machine and human weaknesses.
In practice, this might mean that AI tools are used at the initial candidate screening stage. They might scan CVs for essential technical skills, for example, or for particular certificates and qualifications. AI could also be used to offer feedback to more candidates, since it can do so swiftly and at little cost, whereas the same task is laborious and time-consuming for humans.
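As an illustration of what that first-pass screen might look like, here is a hypothetical sketch in Python; the skill list and CV text are invented, and a real tool would be considerably more sophisticated.

```python
# Hypothetical first-pass screen: flag CVs that mention a set of essential
# skills or certifications. Skill list and CV text are illustrative only.
REQUIRED_SKILLS = {"sql", "python", "project management"}

def meets_screening_criteria(cv_text: str) -> bool:
    """Return True if every required skill appears somewhere in the CV text."""
    text = cv_text.lower()
    return all(skill in text for skill in REQUIRED_SKILLS)

cv = "Analyst with five years of SQL and Python experience and a project management certificate."
print(meets_screening_criteria(cv))  # True
```

A screen this simple is transparent and easy to audit, which is part of the appeal of confining AI to the early, mechanical stages of the process; the harder, discretionary judgements stay with people.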
Past the initial screening stage, however, humans are likely best-placed to evaluate CVs and cover letters with the care and consideration they deserve. AI tools are not yet sophisticated enough to make decisions about whether a company would do better to hire a less experienced person with an outstanding attitude, a more experienced person who ticks all the conventional boxes and can hit the ground running, or a wild card who only meets half the candidate criteria but who could bring novel ideas and skills to the role. When evaluating such candidates, there is typically no easy way of comparing them. In the end, one of the most important facets of the decision is whether the humans doing the hiring wish to work with the candidate or not – and that element of recruitment is something that can never quite be automated away. This means that for all AI can help streamline some aspects of the recruitment process, humans remain important, especially where more complex decisions are involved.
Companies that want to use AI more ethically can also audit how much they already use it, including indirect uses: for instance, how their job postings are algorithmically boosted into some users’ social media feeds while remaining invisible to others.
Meanwhile, some AI tools simply aren’t worth the bother at all. Tools that purport to read facial expressions, for example, are particularly controversial: they rely on dominant cultural norms and risk enshrining biases against marginalised groups, including women and people of colour.
Well-intentioned employers wish to hire quality candidates. AI can help in this mission; however, for this to happen, it must remain a tool rather than an arbiter. Human applicants of all stripes deserve this much.