BRUSSELS – In a bid to crack down on online misinformation, the European Commission wants tech companies such as Google, Facebook and TikTok to start labeling content created by artificial intelligence without waiting for digital laws to come into effect.
Text, video and audio created and manipulated by artificial intelligence tools such as ChatGPT and DALL-E are spreading rapidly online. The commission is now calling on dozens of big tech companies that are part of its voluntary anti-disinformation charter to make it easier for people to separate fact from fiction.
“Signatories that have services with the potential to disseminate AI-generated misinformation should implement technology to identify such content and clearly label it to users,” Vera Jourova, the Commission’s vice president for values and transparency, said on Monday, as first reported in Brussels Playbook.
Major online platforms and search engines such as Meta, Twitter and TikTok will have to label AI-generated or manipulated images, audio and video as deep fakes with prominent markings by August 25 under the Digital Services Act (DSA), or face heavy multimillion-euro fines. Meanwhile, the European Parliament is pushing for a uniform rule under the Artificial Intelligence Act that would apply to all companies creating AI content, including text; that law could take effect by 2025.
Jourova also wants companies like Microsoft and Google to build safeguards into their services, including Bard and Bing Chat, so bad actors can’t use so-called generative AI to do harm. She said that Google CEO Sundar Pichai told her that his company is already developing such technologies.
She said the 44 signatories of the Code of Practice on Disinformation, including social media companies, fact-checking groups and advertising associations, would set up a dedicated group to discuss how best to respond to the new technology.
Jourova also slammed Twitter for quitting the voluntary code just months before the DSA comes into force.
“We believe this is a mistake on the part of Twitter,” she said. “Twitter chose confrontation, and that was noticed very much in the Commission.”
Signatories of the code are required to release reports in mid-July with a detailed analysis of how they have prevented falsehoods from spreading on their networks, along with their plans to limit potential misinformation from generative AI.
Jacob Hanke Vela and Mark Scott contributed reporting.