OpenAI has begun implementing the Coalition for Content Provenance and Authenticity (C2PA) standard for images produced with DALL-E 3. According to the company, DALL-E 3 images will soon carry watermarks consisting of embedded provenance metadata and a visible mark, enabling anyone to use C2PA tools to confirm an image's origins.
The purpose of watermarking an image is to let viewers quickly determine whether it was produced by a human or by artificial intelligence (AI). Companies such as Adobe and several camera manufacturers have long adhered to the C2PA standard. OpenAI has confirmed that the visible watermark will appear on DALL-E 3 images generated via the web, the API, and the mobile app.
The watermark will include details such as the date the image was created, along with the C2PA logo in the upper-left corner of the picture. The DALL-E 3 image generator is currently available to ChatGPT Plus subscribers for $20 per month.
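For readers who want to check a downloaded file themselves, the following sketch scans a PNG's chunk list for an embedded C2PA manifest. It is only a presence check, not a cryptographic verification, and it rests on assumptions: the file name is hypothetical, and it assumes OpenAI follows the standard C2PA embedding for PNG files, a chunk of type 'caBX'. Fully validating the signed manifest requires a dedicated C2PA validator.

```python
import struct
import sys

# Hypothetical path to a DALL-E 3 PNG downloaded via the API or ChatGPT.
IMAGE_PATH = "dalle3_output.png"

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"
# Assumption: the manifest is embedded the way the C2PA specification
# describes for PNG files, i.e. in a chunk whose type is 'caBX'.
C2PA_CHUNK_TYPE = b"caBX"


def has_c2pa_chunk(path: str) -> bool:
    """Walk the PNG chunk list and report whether a C2PA manifest chunk is present."""
    with open(path, "rb") as f:
        if f.read(8) != PNG_SIGNATURE:
            raise ValueError("not a PNG file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                return False  # end of file reached without finding the chunk
            length, chunk_type = struct.unpack(">I4s", header)
            if chunk_type == C2PA_CHUNK_TYPE:
                return True
            f.seek(length + 4, 1)  # skip the chunk data and its 4-byte CRC


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else IMAGE_PATH
    print("C2PA manifest found" if has_c2pa_chunk(path) else "no C2PA manifest found")
```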
According to OpenAI, adding the watermark to an AI-generated image has no impact on image quality or on generation latency. File size does grow, however: by three to five per cent for images created via the API, and by thirty-two per cent for images created through ChatGPT.
The protection is not foolproof, though: the visible watermark can be removed by cropping the image, and the embedded metadata can be edited or stripped. The metadata is also lost when an AI-generated image is screenshotted or uploaded to social media.
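That fragility is easy to demonstrate. The sketch below simply re-encodes a PNG with Pillow, which is roughly what a screenshot tool or an upload pipeline does, and checks for the manifest chunk before and after; the file names are hypothetical and the check is the same 'caBX' heuristic used above.

```python
import struct

from PIL import Image


def has_c2pa_chunk(path: str) -> bool:
    """Same presence check as above: look for a 'caBX' chunk in the PNG."""
    with open(path, "rb") as f:
        f.read(8)  # skip the PNG signature
        while True:
            header = f.read(8)
            if len(header) < 8:
                return False
            length, chunk_type = struct.unpack(">I4s", header)
            if chunk_type == b"caBX":
                return True
            f.seek(length + 4, 1)  # skip chunk data and CRC


# Hypothetical file names: the original DALL-E 3 download and a re-encoded copy.
ORIGINAL = "dalle3_output.png"
RESAVED = "resaved_copy.png"

# Pillow only writes the chunks it recognises, so re-saving drops the manifest,
# much as screenshotting or a social platform's re-encoding would.
Image.open(ORIGINAL).save(RESAVED)

print("original has C2PA manifest:", has_c2pa_chunk(ORIGINAL))
print("re-saved copy has C2PA manifest:", has_c2pa_chunk(RESAVED))  # expected: False
```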
OpenAI is not the only company doing this. Microsoft adds a watermark to images created with the DALL-E-powered Bing Image Creator. Samsung applies a watermark and updates the metadata when an image is edited with the built-in AI tools on the Galaxy S24 series (features that will soon reach the Galaxy S23 series as well). Meta, too, recently unveiled a feature that adds invisible watermarks to AI-generated content.
Why is watermarking AI-generated images so important?
AI-generated or AI-manipulated images have long been a source of problems, ranging from celebrity deepfakes to impersonation. A clearly visible watermark makes it simple even for audiences who are not tech-savvy to tell whether an image is an original creation or was generated with AI, which helps curb the spread of false information.