Daily Management Review

Experts Want To Fix Problem Of AI Hurting People Of Color And The Poor

07/24/2018
New technology brings great opportunities, but it also brings multiple problems.
 
Smartphones, for example, have put access to near-infinite knowledge in our pockets, but they have also brought the problem of tech addiction.
  
The same is true of artificial intelligence, which has the potential to fundamentally change the world while also giving rise to increased exclusion and racial bias.
 
Debates about the ill effects of artificial intelligence have focused primarily on self-driving car crashes and the advent of killing machines.
 
However, many researchers argue that the poor, the disenfranchised, and people of color will be the hardest hit, posing a greater challenge for society at large.
 
"Every time humanity goes through a new wave of innovation and technological transformation, there are people who are hurt and there are issues as large as geopolitical conflict," said Fei Fei Li, the director of the Stanford Artificial Intelligence Lab. "AI is no exception."
 
These are current issues, not ones that will only arise in the future. The likes of Siri and Alexa work because of AI-enabled speech recognition, and Google Photos and Google Translate also depend on AI. The technology plays a role in Amazon pushing products, Pandora suggesting songs, and Netflix recommending movies, and it is central to the development of autonomous vehicles.
 
Machine learning is one area of AI in which machines make decisions by analyzing massive amounts of data, and it enables machines to recognize patterns.
 
"In AI development, we say garbage in, garbage out," Li said. "If our data we're starting with is biased, our decision coming out of it is biased."
 
And as technologies like facial recognition become more widely used – in law enforcement, border security, and even recruitment – such issues need to be addressed with increasing urgency.
 
Technical approaches can also help. For example, the Fairness Tool, developed by Accenture, analyzes data to identify bias and correct problematic models.
 
"One naive way people were thinking about removing bias in algorithms is just, 'Oh, I don't include gender in my models, it's fine. I don't include age. I don't include race,'" said Rumman Chowdhury, who helped develop the tool.
 
"Every social scientist knows that variables are interrelated," said. "In the US for example, zip code [is] highly related to income, highly related to race. Profession [is] highly related to gender. Whether or not that's the world you want to be in that is the world we are in."
 
The problem can also be addressed by engaging people from diverse backgrounds in creating artificial intelligence and in applying it across a host of areas, from policing to shopping to banking. That diversity should not be limited to the engineers and computer scientists building the tools; it should also extend to the groups of people who decide how the tools are used.
 
"We need technologists who understand history, who understand economics, who are in conversations with philosophers," said Marina Gorbis, executive director of the Institute for the Future. "We need to have this conversation because our technologists are no longer just developing apps, they're developing political and economic systems."
 
(Source: www.money.cnn.com)