Big techs lay off teams that handled AI ethics – 03/29/2023 – Tec

Big tech companies have been cutting staff from teams dedicated to assessing ethical issues linked to the use of artificial intelligence, raising concerns about the safety of the new technology as it is widely adopted in consumer products.

Microsoft, Meta, Google, Amazon and Twitter are among companies that have cut members of their “responsible AI teams”, which advise on the safety of consumer products that use artificial intelligence.

The number of affected employees remains in the dozens, a small fraction of the tens of thousands of tech workers laid off in the broader industry downturn.

The companies said they remain dedicated to releasing safe AI products. But experts said the cuts were worrisome, as potential abuses of the technology are being uncovered as millions of people begin experimenting with AI tools.

Their concern was heightened after the success of the ChatGPT chatbot launched by Microsoft-backed OpenAI, which prompted other tech companies to launch rivals such as Google’s Bard and Anthropic’s Claude.

“It’s shocking how many responsible AI staff are being laid off at a time when, arguably, you need these teams more than ever,” said Andrew Strait, a former ethics and policy researcher at Alphabet-owned DeepMind and associate director at the research organization Ada Lovelace Institute.

Microsoft disbanded its entire ethics and society team in January, which had led the company’s initial work in the area. The tech giant said the cuts totaled fewer than 10 jobs and that Microsoft still had hundreds of people working in its responsible AI department.

“We have significantly increased our Responsible AI efforts and strive to institutionalize them across the enterprise,” said Natasha Crampton, Director of Responsible AI at Microsoft.

Twitter has cut more than half of its staff under its new owner, billionaire Elon Musk, including the small AI ethics team. The team’s previous work included correcting a bias in Twitter’s algorithm that appeared to favor white faces when choosing how to crop images on the social network. Twitter did not respond to a request for comment.

Twitch, the Amazon-owned streaming platform, cut its ethical AI team last week, making all teams working on AI products responsible for bias-related issues, according to a person briefed on the change. Twitch declined to comment.

In September, Meta disbanded its responsible innovation team of about 20 engineers and ethics experts tasked with evaluating civil rights and ethics across Instagram and Facebook. Meta did not respond to a request for comment.

“Responsible AI teams are among the only internal bastions Big Tech has to ensure that the people and communities impacted by AI systems are taken into account by the engineers who build them,” said Josh Simons, a former ethical AI researcher at Facebook and author of “Algorithms for the People”.

“The speed with which they are being phased out leaves Big Tech algorithms at the mercy of advertising imperatives, undermining the well-being of children, vulnerable people and our democracy.”

Another concern is that the large language models that power chatbots like ChatGPT are known for “hallucinating” – making false statements as if they were facts – and can be used for malevolent purposes such as spreading misinformation and cheating on exams.

“What we’re starting to see is that we can’t predict all the things that are going to happen with these new technologies, and it’s crucial that we pay some attention to them,” said Michael Luck, director of the Institute of Artificial Intelligence at King’s College London.

The role of internal AI ethics teams is under scrutiny amid debate over whether human intervention in algorithms should be made more transparent, with input from the public and regulators.

In 2020, photo app Instagram, owned by Meta, assembled a team to address “algorithmic justice” on its platform. The “IG Equity” team was formed after the police killing of George Floyd, with the aim of adjusting Instagram’s algorithm to encourage discussion about race and highlight profiles of marginalized people.

“They are able to step in and change these systems and prejudices, and explore technological interventions that will promote equity (…) but engineers should not decide how society is shaped,” said Simons.

Some employees charged with overseeing ethical AI at Google have also been laid off as part of broader cuts at Alphabet affecting more than 12,000 employees, according to a person close to the company.

Google did not specify how many roles were cut, but said responsible AI remains a “top priority at the company and we continue to invest in these teams”.

The tension between the development of AI technologies in companies and their impact and safety had already surfaced at Google. Two AI ethics research leaders, Timnit Gebru and Margaret Mitchell, left in 2020 and 2021, respectively, after a highly publicized dispute with the company.

“It’s problematic when responsible AI practices are undercut by competition or to give the market a boost,” said Strait of the Ada Lovelace Institute. “Unfortunately, I see that this is exactly what is happening now.”

Hannah Murphy contributed reporting from San Francisco. Translated by Luiz Roberto M. Gonçalves.
