Sunday, June 26, 2016

Is Artificial Intelligence Sexist and Racist? A White Guy Problem?

Kate Crawford, a Principal Researcher at Microsoft Research New York City, a member of the advisory board of the Information Program at George Soros' Open Society Foundations, and co-chairwoman of a White House symposium on society and A.I., has taken to the pages of the presumably white but gray lady, The New York Times, to warn that there are:
 very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.
She included this as part of her anecdotal evidence:
Nikon’s camera software...misread images of Asian people as blinking...
What the hell is this? Does Crawford seriously think Nikon is a technologically self-hating company?

The company is based in Tokyo. The president is Kazuo Ushida.

No doubt, as artificial intelligence programs are developed they will have to be tweaked in many ways.

But for Crawford it is all about straight white male superiority--despite the fact that one of her examples comes from a company based in Asia with an Asian president.

She continues:
Currently the loudest voices debating the potential dangers of super intelligence are affluent white men, and, perhaps for them, the biggest threat is the rise of an artificially intelligent apex predator.
But for those who already face marginalization or bias, the threats are here.
Yes, of course, artificial intelligence software glitches always go in favor of straight white male superiority and against "marginalized groups," according to Crawford. We would never see, for example, an artificial intelligence virtual assistant glitch that accidentally provides us with a transgender assistant.

But wait.

In the same edition of the white gray lady, Henry Alford reports on his experience using a virtual assistant:
I recently used a virtual assistant named Amy for 10 days....

Hailing from a New York start-up called x.ai, Amy...will set up meetings for you. Once someone has agreed to meet with you at a certain place, you cc Amy, and independently of you she’ll go back and forth with the other person to determine a mutually convenient time, and then help you to put that time on your calendar....

Allowing someone to do your vetting requires trust. I applaud x.ai for including, at the bottom of each of Amy’s emails, the information that Amy Ingram is a form of artificial intelligence...

The strangest moment I’ve had with Amy, though, came when I had her set up an appointment for a phone interview with an x.ai employee who also uses the company’s virtual assistant — in this case, Andrew — for making appointments.

After Amy and Andrew had set up the appointment, I asked Amy why I didn’t see the appointment on my calendar; strangely, she wrote back as Andrew. I thought, not only is my assistant invisible, unpredictable, occasionally moody, and incorrigible — she is also trans.
Bottom line: It is really a stretch to think there are glitches built into artificial intelligence that go in only one direction. The real pre-programming problem is in the Kate Crawford types, who see sexism and racism, and only sexism and racism, in the most bizarre places at all times.

-RW

4 comments:

  1. On the other hand, simple AIs are not bound by political correctness (yet). If you asked the AI which ethnic group commits most of the homicides in America, the AI would give you the correct, albeit "racist," answer.

  2. That last line about trannies is triggering.

  3. The MS researcher...man, that's some tortured logic, since more than half of AI researchers are Asian. Are they white supremacists too?

  4. Quick aside, but the term "Asians" is something only Westerners use. In Asia, there are Chinese, Malays, Japanese, Indonesians, etc. I have never heard one person in Asia refer to someone as "Asian," so right there this woman is racist...if that is up for discussion.

    A bit of truth though: AI is based on data, and the more your data skews in one direction, the more the algorithm will skew that way (see the sketch after these comments). For instance, I work in AI in Southeast Asia, and my algorithm will - using the data we have now - predict results based on local cultures and preferences at large more so than, say, "white" preferences ("white" is in quotes because Asians don't call people white; there are Americans, Brits, French, etc.).

    In other words, besides her racism against the various peoples living in Asia, her bias is fundamentally bounded by the national borders of the US. If you compile data globally, I can quite assure her it will skew female and non-white.

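A minimal sketch of the data-skew point in that last comment, in Python (the dataset and the preference labels here are made up purely for illustration): a majority-vote "model" trained on regionally skewed data simply reproduces whatever preference dominates that data, with no ideology involved.

    # Hypothetical illustration: a majority-vote "model" trained on
    # regionally skewed data reproduces the skew of its training data.
    from collections import Counter

    # Made-up training set: 90% of the rows come from Southeast Asian users.
    training_preferences = ["durian"] * 90 + ["apple pie"] * 10

    def train_majority_model(labels):
        # Predict whichever label is most common in the training data.
        return Counter(labels).most_common(1)[0][0]

    print(train_majority_model(training_preferences))  # -> durian

Compile the data globally instead, and the same code's prediction shifts with the new majority; the "bias" lives in the dataset, not in the arithmetic.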