Instagram is reportedly gearing up to introduce a new feature that will tell users when an image has been generated using artificial intelligence (AI). The move comes as part of Meta’s commitment to using AI responsibly and promoting transparency within the AI community. Reverse engineer Alessandro Paluzzi recently shared screenshots of labels that will indicate whether an image was created using generative AI tools. Intended to inform users about an image’s origin, the labels read: “The creator or Meta said that this content was created or edited with AI” and “Content created with AI is typically labeled so that it can be easily detected.”
The inclusion of AI labeling aligns with Meta’s efforts to take responsible measures in AI usage and safeguard user interests. During a meeting with US Vice President Kamala Harris, Meta pledged to use watermarking on AI-generated content to enhance transparency and awareness among users.
Meta has taken a distinctive approach to AI by making its AI tools open source, setting it apart from companies like Google and OpenAI, which exercise more caution in sharing their AI technologies. This approach allows developers and companies to access, copy, modify, and reuse Meta’s AI tools for their own purposes. In February 2023, Meta announced that its Large Language Model Meta AI (LLaMA) would be made available to organizations affiliated with government, civil society, academia, and industry research laboratories worldwide. Meta has also formed an AI partnership with Microsoft.
The potential introduction of AI labeling on Instagram highlights Meta’s efforts to make its AI more consumer-facing. The company appears to be actively working on various generative AI features for Instagram, including labels that let creators mark images as “generated by Meta AI.” These efforts aim to help users discern when AI was used to create content, especially as generative AI tools become more prevalent.
As generative AI gains popularity, concerns have arisen regarding the potential misuse of such technology, leading researchers and policymakers to seek measures to address misinformation and misleading content. In response, seven AI companies, including Meta, recently pledged to adopt AI safety measures, such as watermarks for AI-generated content.
While specific details about Meta’s consumer-facing generative AI plans have yet to be fully disclosed, Meta CEO Mark Zuckerberg hinted at various AI-powered products in the pipeline during a recent quarterly earnings call. These could include creative tools, agents acting as assistants or coaches, and new ways to interact with businesses and creators, all powered by the LLaMA model.