Counterfeit digital material, including deepfakes, is increasingly blurring the line between what is real and what is fake, posing a growing threat to users. The global community is developing various methods to identify such fakes. One such approach is the C2PA standard, which enables embedding metadata into images and other media content to verify whether AI was involved in its creation.
What Is C2PA?
C2PA is a standard that adds metadata to files, including details such as who created the file, where it was created, and how and when the file was edited. That information is cryptographically bound to the file.
In 2024, a deepfake attempt occurred every five minutes, and digital document forgeries surged by 244 percent compared with the previous year. The damages are enormous: fraudsters stole $35 million from a UAE company using forged emails and deepfake audio. Numbers like these are reason enough to examine the current state of digital fakes, and of C2PA, in more detail.
How the World Fights Digital Fakes
The Coalition for Content Provenance and Authenticity (C2PA), supported by Microsoft, OpenAI and other major companies, is taking strong steps to combat fraudulent digital content. Social networks, media platforms and content delivery networks are preparing to adopt the standard broadly to identify fake material. What is the essence of this approach?
Previously, a video deepfake could often be identified by obvious indicators such as missing blinks, misrendered teeth, unnatural facial expressions or extra fingers. Now it is not so straightforward: the technology has advanced to the point that even experts find it difficult to recognize deepfakes without supporting technical tools.
Moreover, the volume of digital fakes has reached a point where relying on human experts to verify every piece of content is no longer feasible. Automating this process has become essential.
The central concept of C2PA is adding and securing metadata that records the source of the content and its history of modifications. In other words, any alteration to the file, such as editing in Photoshop or compression, is recorded in the metadata, which can then be examined (a simplified sketch of such a record follows the list below). This information includes:
- Details about the author and the file’s original creation date.
- Details about the original shooting location.
- Records of modifications, such as the application of filters, compression or other edits.
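As a rough illustration, such a provenance record might look like the following sketch. The structure and every field name here are simplified for readability and are purely hypothetical; the actual C2PA manifest uses its own schema and a binary container format.

```python
import json
from datetime import datetime, timezone

# Illustrative provenance record. All names and values are invented
# for this example; this is not the real C2PA manifest schema.
record = {
    "author": "Jane Photographer",  # hypothetical creator
    "created": datetime(2024, 5, 14, 9, 30, tzinfo=timezone.utc).isoformat(),
    "capture_device": "Example Camera X100",
    "location": {"lat": 48.8566, "lon": 2.3522},
    "edit_history": [
        {"action": "filter", "tool": "ExampleEditor 2.1"},
        {"action": "compress", "tool": "ExampleEditor 2.1"},
    ],
}

print(json.dumps(record, indent=2))
```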
How C2PA Verifies Authenticity
The C2PA method is often described as a digital signature: it attaches unique, verifiable identifiers to content, making it possible to distinguish authentic material from anonymous output produced by neural networks.
IPTC can be viewed as a comparable, earlier technology: it also allows tags to be added to content, but those tags are easily edited or replaced because they are not cryptographically protected.
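To make the signature idea concrete, here is a minimal sketch in Python: hash the content together with its metadata, sign the digest, and later verify that signature. It uses the third-party cryptography package and a bare Ed25519 key for brevity; the actual C2PA specification relies on X.509 certificate chains and COSE signatures, so treat this as an illustration of the principle, not the standard's real format.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

content = b"...image bytes..."
metadata = b'{"author": "Jane Photographer"}'  # hypothetical metadata

# Sign a digest covering both the pixels and the metadata.
digest = hashlib.sha256(content + metadata).digest()
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(digest)  # shipped alongside the file
public_key = private_key.public_key()

try:
    # A verifier recomputes the digest and checks the signature.
    public_key.verify(signature, digest)
    print("Signature valid: content and metadata are unmodified.")
except InvalidSignature:
    print("Signature check failed: the file or its metadata changed.")
```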
Below is an outline of how the C2PA authentication process works (a toy model of the verification chain follows the list):
- A digital object (a photo or video) is captured, and metadata is attached at the moment of creation. This metadata covers both technical and non-technical aspects of the content. Technical characteristics typically include the color profile, shutter speed and other camera settings at the time of shooting; non-technical data includes the location, the identity of the content producer and so on.
- Metadata becomes part of the image’s accessible information, indicating whether it was cropped, compressed or modified using an editor. This data must be entered and secured directly by the content processing applications themselves. Each recorded change to the image is backed by a digital signature.
- All details regarding the content’s origin and its modification history are contained in a special manifest, which is updated every time the file is altered.
- Information about the image’s original source is retained. Even after edits, it remains possible to determine, for example, the camera’s focus setting at the time of the shooting and the device used, regardless of subsequent modifications like applying a defocus effect.
- The entire history of the content is compiled and documented from the moment it was first created through every subsequent attempt to modify it.
- The metadata is associated with the content using one of two methods: a strict (hard) binding that makes the data inseparable from the exact bits of the content, or a more flexible (soft) binding that survives specific alterations or adjustments.
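Here is a toy model of that verification chain in Python: every edit appends a claim signed over the current content hash plus the hash of the previous claim, so tampering anywhere breaks the chain from that point onward. It reuses the cryptography package from the earlier sketch; real C2PA manifests use certificate-backed COSE signatures and a binary container, so this is conceptual only.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def _digest(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def make_claim(content: bytes, action: str, prev_claim, key) -> dict:
    # Each claim covers the current content hash and the previous claim.
    # A single key signs everything here for brevity; in practice each
    # capture device or editor would use its own certified key.
    payload = {
        "action": action,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev_hash": _digest(prev_claim) if prev_claim else None,
    }
    signature = key.sign(json.dumps(payload, sort_keys=True).encode())
    return {"payload": payload, "signature": signature.hex()}

def verify_chain(content: bytes, claims: list, public_key) -> bool:
    prev = None
    for claim in claims:
        body = json.dumps(claim["payload"], sort_keys=True).encode()
        try:
            public_key.verify(bytes.fromhex(claim["signature"]), body)
        except InvalidSignature:
            return False  # a claim was forged or altered
        if claim["payload"]["prev_hash"] != (_digest(prev) if prev else None):
            return False  # the chain was broken or reordered
        prev = claim
    # The newest claim must describe the file as it exists now.
    return claims[-1]["payload"]["content_hash"] == hashlib.sha256(content).hexdigest()

key = Ed25519PrivateKey.generate()
capture = make_claim(b"raw pixels", "capture", None, key)
crop = make_claim(b"raw pixels, cropped", "crop", capture, key)

print(verify_chain(b"raw pixels, cropped", [capture, crop], key.public_key()))  # True
print(verify_chain(b"tampered pixels", [capture, crop], key.public_key()))      # False
```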
Drawbacks to the C2PA Approach
Many social platforms are planning to integrate C2PA into their content workflows to improve tracking of AI-generated material. Google intends to implement C2PA in its search results and advertising systems, and it is also considering integration into YouTube.
However, experts highlight a significant drawback: implementing this approach would require overhauling social network algorithms, which is both difficult and costly. Where will the funding come from? Users themselves may end up covering the expense through a payment mechanism granting access to verified content (at least during the initial phase).
Another significant layer of issues involves security and privacy. How should geolocation data be handled when the photos in question are posted by regular users who often take them at home? More broadly, how can all this additional file information be safeguarded? After all, even the camera angle is a form of creative expression protected by copyright and cannot simply be disclosed freely.
Ensuring a uniform approach to verification is also crucial. If every social network and service employs its own method, confusion may arise, and C2PA is not the only standard aimed at determining authenticity. YouTube, for instance, has its own approach: it marks AI-generated videos with an “Altered or Synthetic Content” label. For now, however, applying the label is left to users’ own discretion.
Despite the lack of a unified approach across social networks, the measures currently in place can certainly help reduce the volume of fraudulent digital content. It would also be useful to establish a mechanism for user interaction: upon receiving a complaint about specific files, the platform would check their metadata for evidence of manipulation.
Can Metadata Be Faked?
At present, the C2PA approach works, but it needs ideal conditions to function properly. It performs well in a model scenario with honest users, but someone who wants to, for example, change the account name in the operating system will find that easy to do. Even the simple act of taking a screenshot of the original image replaces the data related to its creator once the copy is published. Consequently, in real-world adversarial conditions, the current version of this approach may not be very effective.
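A small sketch shows why the screenshot trick works: a screenshot re-renders the picture into entirely new bytes, so the content hash recorded in any signed manifest no longer matches, and the screenshot itself carries no manifest at all. The values below are purely illustrative.

```python
import hashlib

# Hash recorded in the (hypothetical) signed manifest of the original file.
signed_hash = hashlib.sha256(b"original image bytes").hexdigest()

# A screenshot is a brand-new file: different bytes, no manifest attached.
screenshot = b"pixels re-rendered from the screen"

if hashlib.sha256(screenshot).hexdigest() != signed_hash:
    print("Provenance cannot be confirmed: not the originally signed file.")
```

A verifier can flag the screenshot as unverified, but it cannot prove where the content came from, which is exactly the laundering opportunity described above.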
C2PA technology, integrated with cloud solutions, can help combat inexperienced scammers or track copyrighted content. Still, many experts agree that at this stage of development, it will not be able to provide complete protection against deepfakes.
In addition, there are other drawbacks to C2PA technology:
- Limited device support. At present, only a few Sony and Leica camera models support the C2PA standard. It remains unclear whether Apple and Google will implement C2PA features in iPhones and Android devices.
- Software compatibility issues. Not all image editors currently support C2PA. For example, GIMP and Affinity Photo have yet to implement this feature.
- Lack of widespread adoption. Most major online platforms have not begun displaying C2PA data.
- Potential privacy issues. Implementing such a system may raise user concerns.
- Need for industry cooperation. The effectiveness of the C2PA approach relies on the willingness of key content processing, publishing and distribution companies to collaborate.
However, all these shortcomings can be overcome if enough participants, primarily social networks and messengers, adopt the standard and the underlying technologies continue to improve.
C2PA’s efforts can only slightly ease the burden on individuals whose faces have been digitally inserted into inappropriate images. Combating current types of deepfakes requires more modern solutions. Today, the next move belongs to voice and video communication providers, among which messengers and social networks hold the lion’s share. The future peace of mind of users rests in their hands.