Daily Management Review

Google Directs Its Scientists To Have A "Positive Tone" In AI Research Papers: Reports


12/23/2020

Alphabet Inc's Google launched a "sensitive topics" review earlier this year in an attempt to tighten control over its scientists' papers. In at least three cases, the company asked authors of such papers not to cast its technology in a negative light, said reports based on internal company communications and interviews with researchers and paper authors.
 
Under Google's new review procedure, researchers must consult the company's legal, policy and public relations teams before working on research topics such as face and sentiment analysis and categorizations of race, gender or political affiliation, said reports quoting parts of the company's internal webpages that explain the policy.
 
"Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues," one of the pages for research staff reportedly told the media.
 
According to reports, the policy was implemented by Google in June this year.
 
No comment on the issue was available from Google.
 
Reports further quoted current and former employees as saying that the "sensitive topics" process adds a round of review and scrutiny on top of Google's standard review of papers, which checks for pitfalls such as the disclosure of trade secrets.
 
For some projects, Google officials have also intervened at later stages. The news agency Reuters cited internal Google correspondence showing that a senior Google manager reviewed a study on content recommendation technology shortly before its publication and asked the author to "take great care to strike a positive tone" about the company's technology in the paper. "This doesn't mean we should hide from the real challenges" posed by the software, the manager reportedly added in the directions to the author.
 
The authors were also asked to update the paper "to remove all references to Google products", according to reports quoting subsequent correspondence from a researcher to the reviewers.
 
Staff researchers, including senior scientist Margaret Mitchell, believe Google is starting to interfere with crucial studies of the technology's potential harms, according to reports.
 
"If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we're getting into a serious problem of censorship," Mitchell said.
 
Google’s scientists have "substantial" freedom, the company states on its public-facing website.
 
The explosion in research and development of artificial intelligence across the tech industry has prompted authorities in the United States and elsewhere to propose rules for its use. Some of those concerns and proposals have cited scientific studies showing that facial analysis software and other AI technologies can perpetuate biases or erode privacy.
 
(Source: www.nasdaq.com)