The summit also followed swiftly on from weekend news of a regulatory investigation into Goldman Sachs’ credit card practices, after a prominent software developer called attention to differences between the credit lines offered to male and female Apple Card customers (the card is issued by Goldman Sachs).
Apply similar potential issues to, for instance, decisions about who receives universal credit or housing benefit – with those affected having no redress – and the danger is clear, said Carly Kind, director of the Ada Lovelace Institute, which describes itself as an independent research and deliberative body with a mission to ensure data and AI work for people and society. With facial recognition, she said, “there is a sense of inevitability”, with people feeling they have no control. In this sphere, data sharing is passive, she observed – people “leak the data”.
Michael Veale, a digital rights academic at UCL, made a similar observation about the lack of trust in decline buttons on websites and apps, with people understandably taking little or no interest because they feel they have no real options. Much of this online and app tracking was made illegal ten years ago, he noted, which means the technologies are operating outside the law. Apps typically carry upwards of ten trackers, he said.
Often the website hosts themselves have no idea of the problems; publishers, for example, may not know what is going on with the advertising on their sites. Veale pointed to last year’s malware infection that forced the computers of visitors to thousands of websites to mine cryptocurrency, including sites belonging to NHS services, the UK’s Information Commissioner’s Office (ICO) and several English councils.
Then there is the huge and complex area of bias, whether on gender, ethnicity, religious or other grounds. AI technologist Kriti Sharma cited her experience of being questioned far less about her competence when submitting code to online tech forums if she hid her gender.
What would we think, she asked, about someone who thought things like this:
- A black person is less likely than a white person to pay off their loan on time;
- A person called John makes a better programmer than a person called Mary;
- A black man is more likely to be a repeat offender than a white man.
“A pretty sexist, racist person, right?” But these are real AI-based decisions, reflecting the biases the systems have learned from humans. There are dangers in advertising too: consider the AI that ensures gambling addicts are pushed adverts for online casinos, or that job adverts in the US for salaries above $200,000 are more likely to be shown to men than to women.
Fixing this needs awareness of our own biases, diverse tech teams (around twelve per cent of people working in AI and machine learning are women, said Sharma) and diverse datasets.
The theme was taken up with gusto by Caroline Criado Perez, author of “Invisible Women: Exposing Data Bias in a World Designed for Men”. She dissected the deep-seated bias that treats female bodies as “atypical”, leading to women being ignored in areas ranging from medical research, medication and apps to transport research and design (the car crash-test dummy being a case in point).
Criado Perez felt the bias was not deliberate – “misogynists are just not that smart” – but ingrained, with the typical reference human being a 70kg Caucasian man. Fixing it requires properly sex-disaggregated data, and diversity is fundamental, she said.
Facial recognition has its own issues, with algorithms finding it much harder to recognise people with darker skin. Even the top-performing offerings show a ten-fold difference in error rates between lighter and darker skin tones.