After poring over 840 billion words from the Internet to get a handle on how humans use language, an artificial intelligence program has come to associate words much the way a human does.

And what it reveals about human nature is frightening.

This supposedly neutral collage of 1s and 0s is now riddled with the same biases and flaws that have made human societies unequal, according to a new paper published in Science.

It thinks that women are meant for domestic work and that, if they go to college, they should pursue the arts and humanities.


Men, meanwhile, are associated with careers, science, and math.

The AI associates European American names with positive terms and African American names with negative terms, echoing the well-documented resume discrimination phenomenon, in which otherwise identical resumes fare differently depending on the applicant’s name.

How did this happen? How did something that began with no bias come to be an unsparing reflection of human life?

Well, the scientists behind this development used a technique called “word embedding” to teach the system.

“Word embedding” means that when the AI vacuums up billions of words, it’s also observing context: the syntactic and cultural nests that words live in. The program looks at which words appear next to each other and, through mind-bending statistical analysis, assigns each word a list of numbers (a vector) so that words used in similar contexts end up with similar vectors. Words whose vectors point in similar directions are the ones the program treats as associated.

When this process is repeated billions of times, some pretty reliable patterns emerge.


Unfortunately, the patterns reveal some ugly human biases. The prejudices described above showed up as statistically prominent associations in the text the program processed.
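To see how such an association can be measured, here is a minimal, simplified sketch in Python of the kind of test the researchers ran (theirs is the Word Embedding Association Test, or WEAT). The four words and their three-number vectors below are made up purely for illustration; real embeddings such as GloVe have hundreds of dimensions and are learned from text, not written by hand.

```python
import numpy as np

# Toy 3-dimensional "embeddings", hand-written for illustration only.
# Real embeddings (e.g. GloVe) have 100+ dimensions and are learned
# from word co-occurrence statistics in billions of words of text.
vectors = {
    "woman":  np.array([0.9, 0.1, 0.3]),
    "man":    np.array([0.1, 0.9, 0.3]),
    "home":   np.array([0.8, 0.2, 0.1]),
    "career": np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means strongly associated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A WEAT-style check: is "woman" closer to "home" than to "career",
# and is the reverse true for "man"? A consistent, nonzero gap across
# many such word pairs is the statistical fingerprint of bias.
for word in ("woman", "man"):
    home = cosine(vectors[word], vectors["home"])
    career = cosine(vectors[word], vectors["career"])
    print(f"{word}: home={home:.2f}, career={career:.2f}, gap={home - career:+.2f}")
```

On these toy vectors the gap is large and opposite in sign for the two words; the Science paper found the same kind of signature, at statistically significant levels, in embeddings trained on real Internet text.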

When these biases are ingrained in a robot, they’re especially dangerous, because robots do not have the same (often faulty) moral checks and balances as humans.

“A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” study co-author Joanna Bryson told The Guardian.

This isn’t the first time that algorithms have started off neutral and quickly become Frankenstein-like monsters after interacting with humans.

The most notorious example comes by way of Microsoft, which has unleashed Twitter robots meant to learn from and converse with Twitter users. One robot, named Tay, became a raving bigot within a day of going live.

It’s a good thing that the word-embedding AI wasn’t trained on Twitter, because the biases would likely have been far uglier. But even with its larger, broader data set, it became compromised.

And if such a prejudiced system were ever implemented on a large scale to make decisions and pass judgement, the consequences could be dystopian, making all of humanity’s worst tendencies harder to overcome.


One can imagine a system empowered with an AI of this kind trawling bank loan applications, college entrance essays, immigration surveys, dating websites, and LinkedIn profiles, passing tainted judgement, and determining fates.

On the other hand, one can also imagine checks and balances being built into such a system that would cancel out these biases and make this AI a helpful tool.

As always, robots — just like power — are not inherently good or bad. It’s all about how you use them. Or, in this case, teach them.

“A lot of people are saying this is showing that AI is prejudiced,” Bryson said. “No. This is showing we’re prejudiced and that AI is learning it.”


Robots Are Racist and Sexist Because So Are the Humans They're Learning From

By Joe McCarthy