
Post written by Korinna Sjoholm

Korinna is a Partner at boutique executive search firm The Miles Partnership and has spent her career leading search mandates across the industrial sector.

Last week I heard a popular comedian make a joke. Essentially, he highlighted the irony that just as we are edging closer to achieving gender equality, AI has the potential to hand all of the roles we have fought to secure over to robots. Perhaps in a few years it will no longer be gender or BAME equality that we seek, but rather equality between human and machine.

Could 100 years of fighting for our right to be seen as equal in the workplace, and indeed in society, really come down to finally achieving that board appointment, only for the position to be handed over to artificial intelligence? And what if the AI itself is encoded with a bias in favour of white men? The irony won’t be lost on many women.

Forbes recently published an article highlighting research undertaken by Joy Buolamwini at MIT, who had uncovered a disturbing trend. Following some personal experiences with early-concept AI, she undertook a study of AI-powered facial recognition systems from Microsoft, IBM and Face++. When asked to discern between 1,000 faces, the systems made 34% more errors on female faces, and especially on non-white female faces.

Joy went on to hypothesise a future where AI makes the decisions on mortgage applications, loans and essentially most financial matters, areas in which the existing human system is already biased towards white men receiving more positive outcomes than BAME applicants or women. If AI is biased towards men, just as the current human system is proven to be, are we only reinforcing existing trends by adopting it?

Joy’s study found that facial recognition software was discriminating against women, albeit discreetly. It was a manifestation of the data being fed in by predominantly male programmers. It seems logical to assume that AI more generally is simply an extension of those who code it, and that the information going in will determine the decisions coming out. But what if that information is incorrect, or if the data used to formulate decisions is biased in some way?

In 2016 Microsoft’s chatbot Tay was famously taken offline within 16 hours of its experimental release onto Twitter. A quick search online reveals that Tay was built with huge redundancy and that hacking was deemed inevitable; what Microsoft’s programmers didn’t anticipate was the level of ‘hate’ that would be fed into Tay, which they had developed specifically to learn from its human interactions. It was pulled after likening feminism to cancer, among other outrageously offensive comments.

In 2015 both the Guardian and the Washington Times reported on research conducted by Carnegie Mellon into gender bias in online executive job searching. The researchers built an automated tool called AdFisher that pretended to be a series of male and female job seekers. Its pool of 17,370 fake profiles was shown 600,000 adverts, which the team tracked and analysed.

The AdFisher team found that when Google presumed users to be male job seekers, they were much more likely to be shown ads for high-paying executive jobs. Google showed the ads 1,852 times to the male group and just 318 times to the female group. What is particularly worrying is that the fake users had only been given access to job sites. There was no data to suggest whether a user was male or female; the AI simply guessed and judged accordingly.

You may argue that at a certain level executives tend not to search for new appointments on job sites, preferring instead to use tools such as LinkedIn to remain visible to prospective employers and headhunters. But what if LinkedIn is also biased towards males? In 2016 the business was forced to apologise for a glitch that autocorrected female names to male names in search results.

Shockingly, online news platform The Conversation wrote in March 2018 that LinkedIn was less likely to offer popups for high-paying jobs to female users than to male users. Quoting an earlier article in TechRepublic, they claimed that high-paying jobs were not displayed as frequently for women as they were for men. Anu Tewary, chief data officer for Mint at Intuit, said:

“Again, it was biases that came in from the way the algorithms were written. The initial users of the product features were predominantly male for these high-paying jobs, and so it just ended up reinforcing some of the biases.”

AI is coming, and yes, it will inevitably reshape many businesses and their workforces. It remains to be seen whether bias can be taken out of the system and whether AI can in fact support women rather than hinder them. One observation made by The Conversation suggests that whilst bias is indeed a huge concern, the rise of AI might actually play to women’s advantage.

It is widely accepted that women can offer greater emotional intelligence than their male peers, and this EQ may become a valuable commodity in the post-AI workplace. There will be a greater requirement for leaders to understand human behaviour in ways that machines will struggle to. Leadership roles will demand an ever deeper understanding of social context, empathy and compassion, and it is here that applicants with demonstrably higher levels of emotional intelligence will thrive.

Perhaps AI will eradicate many of the roles women have fought for generations to compete for on a level playing field. It may even actively work against us on some level, reinforcing bias when it comes to gender and race. It will not, however, remove the need for humans to police the system and to weigh ethics and empathy, and it is here that women may at last succeed in the battle for gender equality.
