Facebook, Instagram, and Threads will start identifying and tagging content generated by artificial intelligence

From next month, content produced with artificial intelligence on Facebook, Instagram, and Threads will be identified and tagged: Meta has released its new policy on AI-generated content across its platforms.

Meta has published a new set of rules that will apply to AI-generated content across its platforms, including Facebook, Instagram, and Threads.

The world is currently debating whether governments should regulate AI on online platforms, and one popular opinion is that AI content should be labeled so people know which pieces of content were made using artificial intelligence. While that debate is ongoing, Meta has decided to begin tagging AI content on Facebook, Instagram, and Threads after receiving feedback from its moderators.

How to identify the content produced by artificial intelligence

Meta applies a "Made with AI" tag to photos, videos, images, and audio created using artificial intelligence. There are two ways Meta can tell whether uploaded content is AI-generated: one is by detecting "industry-shared signals" in the content itself, and the other is by allowing users to declare that the content they upload was created with AI.

Meta is particularly careful with content of greater public interest: a more prominent label is applied to AI content that risks misleading people about important matters. That said, the company did not go into detail about how it determines how important a piece of content is.

Time to start tagging content created with artificial intelligence

Meta's new artificial intelligence content policy will take effect next month. While Meta will no longer remove manipulated media outright, its community policies and standards will still apply to anything posted on its platforms, including AI-generated media. "Most stakeholders agreed that removal should only be limited to the highest-risk scenarios where the content could be linked to harm, as generative AI is becoming a mainstream tool for creative expression," Meta wrote in its official blog post.