Daily Management Review

Meta Updates The Rules Governing Deepfakes And Other Modified Material


04/06/2024

Facebook parent company Meta announced significant changes on Friday to its rules on digitally created and altered media, ahead of US elections that will test its ability to police misleading content produced by new artificial intelligence technology.
 
Vice President of Content Policy Monika Bickert said in a blog post that the social media giant will begin labelling AI-generated videos, images, and audio posted on its platforms as "Made with AI" in May, expanding on an earlier policy that covered only a narrow slice of doctored content.
 
Bickert said Meta will also apply separate, more prominent labels to digitally altered media that poses a "particularly high risk of materially deceiving the public on a matter of importance," regardless of whether the content was created using artificial intelligence (AI) or other tools.
 
The new approach will shift the company's handling of manipulated content from one focused on removing a limited set of posts to one that keeps the material up while telling viewers how it was made.
 
Meta previously announced a plan to detect images created with other companies' generative AI tools using invisible markers embedded in the files, but did not give a start date at the time.
 
The new labelling approach will apply to content posted on Meta's Facebook, Instagram, and Threads services, a company spokesperson told Reuters. Its other services, such as WhatsApp and Quest virtual reality headsets, are covered by different rules.
 
Meta will begin applying the more prominent "high-risk" labels immediately, the spokesperson said.
 
The changes come months ahead of the November presidential election in the United States, which industry experts fear could be influenced by new generative AI technologies. Political campaigns have already begun deploying AI tools in countries such as Indonesia, pushing beyond the guidelines issued by generative AI industry leader OpenAI and providers like Meta.
 
Meta's oversight board called the company's existing rules on manipulated media "incoherent" in February, after reviewing a Facebook video of U.S. President Joe Biden that used edited footage to falsely suggest he had acted inappropriately.
 
The video was allowed to stay up because Meta's existing "manipulated media" policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.
 
The board said the policy should also cover videos showing people doing things they never actually did, as well as audio-only content, because non-AI content is "not necessarily any less misleading" than material produced by AI.
 
(Source: www.economictimes.com)