How to sign AI content with Adobe Content Credentials and verify others with the SynthID Detector. A practical 2026 guide for marketers.

Anastasiia Kyslenko · Digital Marketing Specialist · 6+ years, 120+ clients

From 2 August 2026, EU AI Act Article 50 makes marking synthetic content a legal requirement for anyone displaying AI-generated material to an EU audience — fines up to €15 million or 3% of global annual turnover. The good news: if you generate images through Adobe Firefly, ChatGPT, or Microsoft Designer, those files are already signed automatically via the C2PA standard. This article explains what C2PA and SynthID are, which tools mark content without any action from you, where the gaps are and how to close them, and how to verify whether someone else’s content is AI-generated. For the broader legal context, see the AI law overview for businesses.

I generate AI content for clients daily — banners, voice-overs, ad creatives. Here is what is already auto-signed, what you must mark manually, and what becomes mandatory in August 2026 for anyone reaching an EU audience.

What AI watermarking is and why it matters for marketers

AI watermarking is a mechanism that lets you prove or verify the origin of content. There are two levels: a visible label (such as an “AI-generated” badge on TikTok) and an invisible signature embedded in the file’s metadata or pixel data (a C2PA manifest or a SynthID watermark). For a marketer, the difference matters: a visible label can be removed with a screenshot; an invisible one is far harder to strip.

A C2PA manifest (Coalition for Content Provenance and Authenticity) works like a nutritional label for content. Just as food packaging lists ingredients, a C2PA manifest records who created the file, which tool was used, when, and whether it was modified after creation. The data is embedded directly in the file (JPEG, PNG, MP4) and signed cryptographically. Full standard documentation at c2pa.org. The C2PA Steering Committee includes Adobe, Microsoft, Google, BBC, Intel, Sony, Truepic, and OpenAI. Cloudflare supports C2PA at the CDN level. The Pixel 10 is C2PA conformant.
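To make the “nutritional label” idea concrete: in a JPEG, the C2PA manifest travels inside APP11 marker segments (the JUMBF container defined in ISO 19566-5). The sketch below is a minimal stdlib heuristic that checks whether such segments are present at all — it is a presence check only, not cryptographic verification, and the marker-scanning logic is simplified; for real inspection use contentcredentials.org/verify or the c2pa libraries the article covers later.

```python
import struct

def has_c2pa_segment(path: str) -> bool:
    """Heuristic: does this JPEG contain APP11 (JUMBF) segments, the
    container C2PA uses to embed its manifest? Presence suggests a
    Content Credentials manifest is attached; it does NOT validate it."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # no SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                # lost sync with marker structure
            break
        marker = data[i + 1]
        if marker == 0xD9:                 # EOI: end of image
            break
        # segment length is big-endian and includes its own two bytes
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + length]
        # C2PA's JUMBF payload in APP11 starts with the common
        # identifier "JP" (ISO 19566-5)
        if marker == 0xEB and payload[:2] == b"JP":
            return True
        i += 2 + length
    return False
```

A file downloaded back from Instagram will typically fail this check even when the original passed it — which is exactly the platform-stripping problem discussed below.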

SynthID (Google DeepMind) is an invisible watermark of a different kind: it is not stored in metadata but embedded directly in the pixels of an image, the frames of a video, the audio signal, or the tokens of text. Removing it without visibly degrading the file is practically impossible.

Why 2026 is the entry point and not a “wait and see” moment: EU AI Act Article 50 takes effect on 2 August 2026, major platforms are already auto-marking (TikTok in 2024-2025, Meta since February 2025), and C2PA is now supported across most mainstream AI generation tools. The legal details of the EU AI Act are covered in the AI law overview for businesses.

EU AI Act Article 50 — what is mandatory from August 2026

2 August 2026 is the date Article 50 of the EU AI Act enters into force for AI content marking. The article requires a multi-layered approach: signed metadata (C2PA-level) plus invisible watermarks plus fingerprinting for deepfake video and audio. Penalties reach up to €15 million or 3% of global annual turnover. For a freelancer the absolute figure is lower, but the rule is identical.

What Article 50 specifically requires:

  • Providers (those who develop AI systems for content generation) must ensure technical marking of output — a machine-readable signature.
  • Deployers (those who use AI tools in their business — marketers and brands) must disclose to the audience that content is AI-generated.
  • Deepfake video and audio carry a separate, stricter requirement: explicit labelling visible to the end viewer.

Exceptions — three categories where requirements are reduced:

  • Artistic and satirical content — reduced disclosure form (not cancelled, but simplified).
  • Assistive editing — AI assistance in text (autocorrect, suggestions) is not subject to the same requirements as full AI generation.
  • Editorial content with human oversight — if a human editor has substantially reworked AI-generated text, the obligation is reduced. Where “substantially” begins is not yet codified.

Extraterritorial reach. The EU AI Act applies to non-EU producers under Article 2(1)(c): if AI content you produce or distribute reaches an audience in any EU member state — Germany, France, Poland, any other — the Act applies to you as deployer. For agencies, freelancers, and brands running campaigns that target EU markets, this is a direct obligation from August 2026, not a theoretical risk.

The EU Code of Practice on AI-generated content is still being finalised — current status at digital-strategy.ec.europa.eu. Official text of Article 50 — artificialintelligenceact.eu/article/50.

C2PA and Content Credentials — what is already auto-signed in your stack

If you use Adobe Firefly, ChatGPT (DALL-E 3), Microsoft Designer, or Photoshop Generative Fill, your files already receive a C2PA signature automatically — no extra steps required on your side. There is one critical limitation: Instagram, Facebook, and Twitter/X strip C2PA metadata when a file is uploaded to the platform.

What signs automatically:

  • Adobe Firefly / Express — 100% of AI-generated files are signed via Content Credentials. Generate an image in Firefly and you receive a file with a cryptographic Adobe signature.
  • Photoshop Generative Fill — adds a C2PA assertion to the existing file whenever AI fill is used.
  • ChatGPT (DALL-E 3) — OpenAI has added C2PA manifest data to DALL-E 3 image metadata since February 2024 (both ChatGPT UI and API). The visible “CR-pin” appears on the image itself only when generated through the ChatGPT UI — API output files carry the metadata only.
  • Microsoft Designer / Bing Image Creator / Azure OpenAI — automatic C2PA signing.

What does not sign by default: Midjourney, Stable Diffusion, Flux, AUTOMATIC1111, and other open-source or locally-run generators produce no C2PA by default. If you or your client uses these tools, manual signing is required (covered in the next section).

What happens on platforms. TikTok auto-reads C2PA manifests and labels matching content (the first major platform with mass automatic C2PA-based labeling). Meta has auto-marked AI ads created with its own Meta GenAI tools since February 2025; for organic posts, self-declaration is required. YouTube requires mandatory self-declaration for realistically altered or synthetic content.

The critical limitation most people miss: Instagram, Facebook, and Twitter/X strip C2PA metadata on upload. A file you save from your own Instagram post no longer has Credentials attached to it. Always keep the original signed file separately — before uploading to any of these platforms. Full details on the Content Authenticity standard — contentauthenticity.org.

How to sign your content — a practical workflow

The workflow depends on which tool you use to generate content. For Firefly and DALL-E, nothing is needed — but it is worth verifying. For Midjourney, Stable Diffusion, and Flux, manual signing is required. For high-volume generation, the API route makes sense. In every case: keep the original file with Credentials before you publish to Instagram or Facebook.

  1. If you generate in Firefly, DALL-E 3, or Microsoft Designer — it is already signed. Verify: upload the file to contentcredentials.org/verify — free, instant, shows the manifest with date, tool, and signature.
  2. If you generate in Midjourney, Stable Diffusion, or Flux — sign manually. Use the Adobe Content Authenticity (beta) app — upload your AI file, attach your creator identity, and stamp a C2PA manifest. For pipelines at scale, use the open-source c2patool CLI. Note: contentcredentials.org/verify is an inspector for existing Credentials, not a tool for adding them.
  3. For high-volume generation — c2pa-node or c2pa-python. Both libraries are open-source and allow programmatic batch signing. Useful when you are generating hundreds of ad banners for a client via API.
  4. Always save the ORIGINAL file with Credentials separately. Do not rely on the Instagram version to retain the signature — it will not. A folder of originals archived before upload is your legal protection and proof of authorship if it is ever needed.
  5. Add a text disclosure to the post caption. For platforms that strip metadata: a short line like “Image created with AI” or “AI-generated” is sufficient. No need to apologise or write a paragraph — one line closes the gap left by platform stripping.
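For step 2 at pipeline scale, the c2patool route can be scripted. The sketch below builds a c2patool invocation from Python; the flag names follow the open-source c2patool CLI but should be confirmed against `c2patool --help` for your version, and the manifest fields shown (claim_generator, the c2pa.actions assertion, the IPTC `trainedAlgorithmicMedia` source type) are an illustrative minimum, not a complete identity/certificate setup.

```python
import json
import subprocess
from pathlib import Path

def build_sign_command(src: Path, manifest: Path, dst: Path) -> list[str]:
    """Assemble a c2patool call that stamps a C2PA manifest onto an
    unsigned file (e.g. a Midjourney export). Confirm flags against
    `c2patool --help` for your installed version."""
    return ["c2patool", str(src), "-m", str(manifest), "-o", str(dst)]

# Minimal manifest definition: who/what generated the file, plus one
# assertion declaring it AI-generated. Field names follow the c2patool
# example manifests; adapt for your own signing identity.
manifest = {
    "claim_generator": "my-agency-pipeline/1.0",
    "assertions": [
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.created",
                               "digitalSourceType": "trainedAlgorithmicMedia"}]}}
    ],
}

def sign_file(src: Path, dst: Path) -> None:
    """Write the manifest to disk and shell out to c2patool."""
    m = Path("manifest.json")
    m.write_text(json.dumps(manifest))
    subprocess.run(build_sign_command(src, m, dst), check=True)
```

The same manifest dictionary can be reused across a whole batch, which is why this approach scales to hundreds of banners per client.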

If you want all the steps, tools, and the decision tree in one place, there is a structured PDF below.

SynthID — how Google marks its content

SynthID is Google DeepMind’s invisible watermark, embedded at the file level rather than in metadata. For a marketer this means: any content generated through Gemini, Imagen, Veo, or Lyria is already signed automatically. Nothing to configure. The limitation: the SynthID Detector is on a closed waitlist and is not publicly available.

Where SynthID operates:

  • Imagen (images) — the watermark survives crop, JPEG compression, filters, and resize. Even if someone downloads your Imagen image and processes it in Photoshop, the signature persists.
  • Veo (video) — the watermark is embedded frame by frame.
  • Lyria + NotebookLM Audio Overview (audio) — a psychoacoustic watermark. Inaudible to the human ear, it survives MP3 compression and even tempo changes or background noise layering.
  • Gemini (text) — a token-level watermark: the algorithm subtly adjusts word-selection probabilities so that the output carries a statistically detectable signature. Invisible to the reader.
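Google has not published SynthID-Text in full, but the general idea behind token-level watermarking can be sketched. In the toy below (an illustration of the “green-list” family of schemes, not the SynthID algorithm), a keyed hash of the previous token splits the vocabulary in half, generation slightly favours the “green” half, and a detector recomputes the split and counts how often tokens land in it — unwatermarked text scores near 0.5, watermarked text significantly higher.

```python
import hashlib
import random

VOCAB = [f"w{i}" for i in range(1000)]   # toy vocabulary
KEY = b"secret-watermark-key"            # shared by generator and detector

def green_set(prev_token: str) -> set[str]:
    """Keyed hash of the previous token seeds a pseudo-random split of
    the vocabulary; the detector can recompute the exact same split."""
    seed = int.from_bytes(hashlib.sha256(KEY + prev_token.encode()).digest()[:8], "big")
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def generate(n: int, bias: float = 0.9) -> list[str]:
    """Toy generator: with probability `bias` pick the next token from
    the green half, otherwise pick freely. Real systems skew the model's
    token probabilities instead of sampling uniformly."""
    rng = random.Random(0)
    out = ["w0"]
    for _ in range(n):
        pool = list(green_set(out[-1])) if rng.random() < bias else VOCAB
        out.append(rng.choice(pool))
    return out[1:]

def green_fraction(tokens: list[str]) -> float:
    """Detector: fraction of tokens inside the green set of their
    predecessor. ~0.5 for plain text, well above 0.5 if watermarked."""
    hits = sum(1 for a, b in zip(["w0"] + tokens, tokens) if b in green_set(a))
    return hits / len(tokens)
```

The point for a marketer: the signature lives in word-choice statistics, not in any metadata field, which is why copy-pasting the text does not remove it.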

SynthID Detector — an important clarification. The detector exists but is in closed early access — a waitlist for journalists and researchers. There is no public, free access. You cannot independently verify someone else’s content against SynthID unless Google has granted you access. More detail on SynthID — deepmind.google/models/synthid and the Google blog.

What this means practically: content you generate through Gemini or Imagen is signed at a level invisible to users but readable by verification systems. It is your protection against accusations of fabrication — and your proof of authorship when needed. For how AI search (including Google AI Overviews) affects content visibility, see the article on Google Search Live and AI answers.

Audio watermarking — what happens with AI voice

AI voice is the most sensitive area from a deepfake-risk and regulatory standpoint. ElevenLabs and Google embed invisible watermarks in audio automatically. But no public detector covers all TTS models — the absence of a signal does not mean “human voice”.

ElevenLabs. Embeds an invisible watermark in all AI voices by default — enabling the file to be traced back to a specific account. There is a public ElevenLabs AI Audio Classifier — free to use, but only for content generated by ElevenLabs. Regulatory attention to AI audio is intensifying through 2025-2026 (regulator inquiries to providers about watermarking practice are now routine). Details on safety policy — elevenlabs.io/safety.

SynthID Audio (Google). Psychoacoustic watermark for Lyria and NotebookLM Audio Overview. Survives MP3 compression, tempo changes, and layered background noise. Automatic for all content from these products.

AudioSeal (Meta, open-source). A library for manually adding a watermark to already-generated audio — useful if you are using a TTS model that has no built-in signing.

Resemble Detect. API for AI audio detection. Accuracy of 85–95% on standard content — varies by model and recording quality.

The key limitation for all audio detectors: open-source TTS models without a built-in watermark are much harder to detect. If someone generates voice with a local model and no signing, most public detectors will not return a confident signal. The absence of a label does not mean “human voice”.

Got a specific case with AI voice or audio in ads? Message me on the bot: @adastra_assistant_bot

How to verify others — a workflow for marketers

There are two situations where you need to verify someone else’s content: competitive intelligence (is a competitor using AI in their ad creatives?) and client protection (did a supplier deliver an AI-generated photo instead of a real one?). The right tool depends on content type.

Images:

  • contentcredentials.org/verify — free. Upload the file, see the C2PA manifest: who signed it, with which tool, when. If a manifest is present, origin is confirmed. If not — either the file was never signed, or the metadata was stripped when it was uploaded to a platform.
  • hivedetect.ai — ML detection without metadata. Analyses image artefacts and returns a confidence score: the probability that the content is AI-generated. Works even when no C2PA signature is present.

Audio:

  • ElevenLabs AI Audio Classifier — free public detector, but only for ElevenLabs-generated content.
  • Resemble Detect — API with broader model coverage, solid accuracy on standard content, weaker on open-source TTS without embedded watermark.

Video:

  • hivedetect.ai — video module analyses inter-frame inconsistencies and artefacts characteristic of AI generation.
  • SynthID Detector — waitlist only, for Google-generated content (Veo) only. No public access currently.

Competitor ads: Meta Ads Library displays “Made with AI” labels on some ads where Meta has automatically identified AI generation. Not 100% coverage, but a useful signal worth checking.

Critical caveat: the absence of an AI signal does not mean the content is human-made. It means only that the detector found no watermark or recognisable artefacts. Open-source models without signing and well-prepared deepfakes can evade detection. For legally significant conclusions, one detector is not sufficient. For schema markup and structured data for AI search, see the article on schema markup for voice search.
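The verification workflow above reduces to a small decision tree, sketched below. The detector callables are injected as placeholders because the real services (Hive, Resemble, etc.) require API keys and their SDK signatures are not reproduced here — this encodes the logic, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Verdict:
    label: str    # "confirmed-ai", "likely-ai", or "inconclusive"
    reason: str

def verify_image(
    read_c2pa: Callable[[str], Optional[dict]],   # e.g. a C2PA manifest reader
    ml_scores: list[Callable[[str], float]],      # e.g. wrappers around detector APIs
    path: str,
    threshold: float = 0.8,
) -> Verdict:
    """Decision tree for someone else's content: a signed manifest is
    conclusive; otherwise require agreement from at least two
    independent ML detectors before calling it 'likely AI'."""
    if read_c2pa(path) is not None:
        return Verdict("confirmed-ai", "C2PA manifest present: provenance is signed")
    scores = [score(path) for score in ml_scores]
    if len(scores) >= 2 and all(s >= threshold for s in scores):
        return Verdict("likely-ai", f"{len(scores)} independent detectors agree")
    # Absence of a signal is NOT proof of human origin: stripped metadata,
    # unsigned open-source models, and adversarial edits all defeat detection.
    return Verdict("inconclusive", "no manifest and no detector consensus")
```

Note that the only path to "confirmed" runs through a signed manifest; ML scores alone never exceed "likely".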

5 practical steps to take this week

Do not wait until August 2026. Most of these steps take under 10 minutes and give you a clean compliance position, protection against claims, and a clear process to communicate to clients. For AI content in e-commerce specifically, see the article on Shopify, ChatGPT and Google Shopping AI.

  1. Audit your AI stack. List every AI tool you use to generate content for clients. Mark which ones sign automatically: Firefly, DALL-E 3, Microsoft Designer — yes. Midjourney, Stable Diffusion, Flux, local models — no. That list is your gap map.
  2. Set up original-file archiving with Credentials. Before uploading anything to Instagram or Facebook, save the original file to a dedicated folder by client and date — something like client-name/ai-assets/YYYY-MM-DD/. Two minutes of setup now closes the “where is the proof” question for years.
  3. Verify one of your existing files right now. Take any image generated in Firefly or ChatGPT and upload it to contentcredentials.org/verify. If you see a manifest — the pipeline is working. If not — either you uploaded the post-Instagram version, or the tool does not sign.
  4. Add a text disclosure to AI posts. For platforms that strip metadata: a short line in the caption — “Image created with AI” or “AI-generated” — is sufficient. One line is enough; no justification required.
  5. Build a process for EU campaigns before August 2026. If you or your client targets any EU country: confirm that the AI tools in your stack sign output (Firefly, DALL-E 3 — yes; Midjourney — no, manual step needed). For campaigns with no EU reach, a text declaration in captions is the minimum viable step.
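Step 2's archiving convention is trivial to automate. A minimal stdlib sketch, assuming the client-name/ai-assets/YYYY-MM-DD/ layout suggested above:

```python
import shutil
from datetime import date
from pathlib import Path

def archive_original(src: str, client: str, root: str = ".") -> Path:
    """Copy a freshly generated, still-signed file into
    <root>/<client>/ai-assets/YYYY-MM-DD/ BEFORE any platform upload
    strips its Content Credentials."""
    day = date.today().isoformat()                  # YYYY-MM-DD
    dest_dir = Path(root) / client / "ai-assets" / day
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(src).name
    shutil.copy2(src, dest)                         # copy2 preserves timestamps
    return dest
```

Run it as the last step of generation and the first step before upload, and the “where is the proof” question is answered by a folder listing.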

Reference table: tools, auto-marking, and where the gaps are

Below is an overview of common AI tools by content type: whether they sign automatically, and what to do when they do not. The table covers the most widely used marketing stack.

| Tool | Content type | Auto-marking | Method | If not auto-signed |
|---|---|---|---|---|
| Adobe Firefly / Express | Image | ✓ Yes | C2PA Content Credentials | |
| ChatGPT / DALL-E 3 | Image | ✓ Yes (since Feb 2024) | C2PA + (CR) badge | |
| Microsoft Designer | Image | ✓ Yes | C2PA | |
| Midjourney | Image | ✗ No | | Sign manually via contentcredentials.org or c2pa-node |
| Stable Diffusion / Flux | Image | ✗ No | | c2pa-python or manual signing |
| Google Imagen / Veo | Image / Video | ✓ Yes | SynthID | |
| Gemini (text) | Text | ✓ Yes | SynthID (token-level) | |
| ElevenLabs | Audio | ✓ Yes | Invisible audio watermark | |
| Other TTS (open-source) | Audio | ✗ Varies | | AudioSeal (Meta open-source) for manual signing |

Disclaimer: this article is a practical overview for marketers. The EU AI Act Code of Practice is still being finalised at the time of publication, and technical standards can change. Before making decisions with legal consequences, verify the current status via contentauthenticity.org and official EU sources. Legal context — AI law overview for businesses.

Frequently asked questions about AI content marking

Does the EU AI Act apply if my business is registered outside the EU?

Yes, under the extraterritorial principle in EU AI Act Article 2(1)(c). If AI content you produce, distribute, or use in advertising reaches an audience in any EU member state, the Act applies to you as deployer — regardless of where your business is registered. This covers a US agency running Facebook ads into Germany, a UK brand with Google Ads targeting France, and a freelancer anywhere managing campaigns with EU geo-targeting. From 2 August 2026 this is a direct obligation, not a theoretical exposure.

What happens to a C2PA signature when someone takes a screenshot?

C2PA metadata is gone — a screenshot does not preserve the EXIF or XMP fields where the manifest is stored. The image becomes clean, with no provenance signature attached. SynthID watermarks (embedded in pixels rather than metadata) can theoretically survive a screenshot, but detection accuracy drops significantly. The practical rule: a screenshot is a way to accidentally or deliberately erase Credentials. Always save the original file directly from the generation platform, not a screen capture.

How do I check whether a video is AI-generated?

For C2PA-signed video — contentcredentials.org/verify supports video file uploads. For ML detection without metadata — hivedetect.ai has a video module that analyses inter-frame artefacts. SynthID Detector from Google (for Veo content) is a closed waitlist with no public access at the time of writing. For the audio track inside a video — ElevenLabs Audio Classifier or Resemble Detect. No single detector covers all models; running two independent checks is more reliable than one.

Do I have to disclose text written with ChatGPT?

Under the EU AI Act, text is an open question — particularly when it has been substantially edited by a human. The “assistive editing” and “editorial content with human oversight” exceptions are directly relevant here, but the threshold for “substantially” has not yet been codified. The practical framing: if the output is 80%+ ChatGPT with only light editing from you, adding a discreet disclosure is the safer position. If you wrote 70% yourself and used ChatGPT for specific paragraphs, the answer depends on context and whether the content targets an EU audience.

Does Meta automatically label every AI image uploaded to Instagram or Facebook?

No. Meta auto-marks only content generated through its own Meta GenAI tools (Meta AI, Imagine with Meta). If you upload a Midjourney or Stable Diffusion image to Instagram or Facebook, no automatic labelling happens. Midjourney adds no C2PA, and Meta does not auto-detect it. The disclosure responsibility is yours: use the manual label option when uploading, or add a text declaration in the caption.

How much does C2PA signing cost?

The C2PA standard is free and open. Verification via contentcredentials.org/verify is free. The c2pa-node and c2pa-python libraries are open-source. If you generate content in Firefly, DALL-E 3, or Microsoft Designer, signing is already included in the cost of your subscription to those tools — no additional charge. Enterprise-level solutions exist (Truepic for large-scale signing linked to business identity), but for a freelance marketer or SMB brand the entire baseline stack costs nothing.

AI Content Provenance Toolkit — all in one PDF

PDF “AI Content Provenance Toolkit” — 5 sections: content type × platform × auto-marking matrix; step-by-step image signing workflow; how to verify others’ AI content; EU AI Act Article 50 decision tree; 8-point pre-publish checklist. Concrete, no padding.

Download in Telegram →