ChatGPT will tag images generated by DALL-E 3

Now that fraudsters use generative AI for scams and reputation attacks, tech companies are developing ways to help users verify content, especially images. As part of its 2024 push to combat disinformation, OpenAI now embeds provenance metadata in images created with ChatGPT on the web and through the DALL-E 3 API; the mobile apps will receive the same update by February 12.

The metadata complies with C2PA (Coalition for Content Provenance and Authenticity), an open standard, and when such an image is uploaded to the Content Credentials Verify tool, its provenance chain can be traced. For example, an image created in ChatGPT will show an initial metadata manifest indicating it came from the DALL-E 3 API, followed by a second manifest showing that it passed through ChatGPT.
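As a rough illustration of where this metadata lives, here is a minimal, stdlib-only sketch that checks a file for an embedded C2PA manifest. It relies on two details of the C2PA embedding spec: in PNG files the JUMBF manifest is stored in a `caBX` chunk, and in JPEG files it is carried in APP11 (0xFFEB) segments. The function name is ours, and this only detects presence; it does not validate the cryptographic signatures the way Content Credentials Verify does.

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def has_c2pa_manifest(data: bytes) -> bool:
    """Heuristic presence check for an embedded C2PA manifest.

    PNG: the manifest is stored in a 'caBX' chunk (per the C2PA spec).
    JPEG: the manifest is carried in APP11 (0xFFEB) segments.
    Sketch only -- detects the container, does not verify signatures.
    """
    if data.startswith(PNG_SIG):
        pos = len(PNG_SIG)
        while pos + 8 <= len(data):
            (length,) = struct.unpack(">I", data[pos:pos + 4])
            if data[pos + 4:pos + 8] == b"caBX":
                return True
            pos += 8 + length + 4  # length/type header + data + CRC
        return False
    if data[:2] == b"\xff\xd8":  # JPEG SOI marker
        pos = 2
        while pos + 4 <= len(data):
            if data[pos] != 0xFF:
                break  # not a marker; stop scanning
            marker = data[pos + 1]
            if marker == 0xEB:  # APP11 carries the JUMBF/C2PA payload
                return True
            (seglen,) = struct.unpack(">H", data[pos + 2:pos + 4])
            pos += 2 + seglen
        return False
    return False
```

A real integration would use a C2PA SDK to read and verify the manifest chain; this sketch only shows that the data travels inside the image file itself, which is exactly why it is so easy to lose.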

Despite the cryptographic signing behind C2PA, this verification only works while the metadata stays intact; the tool is useless for an AI-generated image whose metadata has been stripped, as happens with any screenshot or image uploaded to social media. Unsurprisingly, the current sample images on the official DALL-E 3 page also return no results. On its FAQ page, OpenAI admits the feature is not a panacea in the fight against disinformation, but argues that the key is encouraging users to actively look for such signals.
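The fragility is easy to demonstrate. The sketch below, again stdlib-only and with a hypothetical function name, rewrites a PNG keeping only the critical chunks (IHDR, PLTE, IDAT, IEND), which is roughly what many re-encoders and social platforms do; any C2PA `caBX` manifest chunk is silently dropped in the process.

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"
CRITICAL = {b"IHDR", b"PLTE", b"IDAT", b"IEND"}

def strip_ancillary_chunks(data: bytes) -> bytes:
    """Re-emit a PNG keeping only critical chunks.

    Mimics what a naive re-encode or platform upload pipeline does:
    ancillary chunks, including a C2PA 'caBX' manifest, are discarded.
    Sketch only; assumes a well-formed PNG.
    """
    assert data.startswith(PNG_SIG), "not a PNG file"
    out = bytearray(PNG_SIG)
    pos = len(PNG_SIG)
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        end = pos + 8 + length + 4  # header + data + CRC
        if data[pos + 4:pos + 8] in CRITICAL:
            out += data[pos:end]  # keep critical chunks verbatim
        pos = end  # skip everything else, manifest included
    return bytes(out)
```

A screenshot is even more destructive, since it produces entirely new pixels with no chunks carried over at all, which is why OpenAI frames the metadata as a signal to look for rather than a guarantee.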

While OpenAI’s latest effort against fake content is currently limited to static images, Google DeepMind already offers SynthID to digitally watermark AI-generated images and audio. Meanwhile, Meta is testing invisible watermarking in its AI image generator, an approach that may be less prone to tampering.

Source: Engadget