Six compliance blocks: personal data, ChatGPT prompts, deepfake ads, chatbots, email automation, and IP. EU AI Act August 2026 — what to set up now.
This is the operational companion to the AI regulation overview article. Here are 30 practical points I personally run through before launching AI in any client project. Most take under 10 minutes and cost nothing. Some are critical before 2 August 2026, when EU AI Act Article 50 comes into force. The rest is baseline hygiene that applies regardless of where you or your clients are based.
I am not a lawyer. I am a marketing freelancer with 6+ years of experience and clients across UA, UK, and DE. For contracts and legal risk management, see a lawyer. For operational hygiene, start here.
If you have not read the legal context in the overview article, I recommend starting there: it covers what the EU AI Act is, who it applies to regardless of where you are based, and why the extraterritorial scope makes it relevant to any marketer running campaigns to EU audiences. For context on how many businesses are actually using AI in 2026, see the AI adoption statistics article.
Before You Start — Define Your Scope
Not all 30 points carry equal urgency for every business. Before running through the full list, answer three questions that determine where your actual risk sits.
Three questions to find your level:
- Do I target ads to, or receive organic traffic from, EU or UK audiences?
- Do I process email addresses, phone numbers, or behavioural data for more than 500 people per month?
- Do I use AI in chatbots, email automation, or ad creative production?
- Basic: non-EU/UK audience, under 5 clients, no chatbots, no AI in paid ads. Points 1, 2, 7, 11, 14, 16, 21, 25: do these today.
- Standard: any EU/UK contacts, email automation, chatbots, or AI-generated ad creatives. Basic points + Blocks 2–5.
- Advanced: commercial AI for EU/UK clients, deepfake risk, subcontractors, audit-ready stack. All blocks + the timeline.
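The three-question triage can be sketched as a tiny function. The thresholds mirror the levels in this section and are illustrative, not legal categories:

```python
def compliance_level(eu_uk_reach: bool, contacts_per_month: int,
                     uses_ai_automation: bool) -> str:
    """Map the three scope questions to a starting level.

    eu_uk_reach: ads or organic traffic reach EU/UK audiences
    contacts_per_month: people whose personal data you process monthly
    uses_ai_automation: AI in chatbots, email automation, or ad creative
    """
    if eu_uk_reach and uses_ai_automation:
        return "advanced"   # all blocks + the August 2026 timeline
    if eu_uk_reach or contacts_per_month > 500 or uses_ai_automation:
        return "standard"   # basic points + Blocks 2-5
    return "basic"          # points 1, 2, 7, 11, 14, 16, 21, 25
```

If you run this once per client engagement, the answer tells you which blocks below to prioritise first.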
For the legal context behind each level, see the AI regulation overview. For data on how businesses are actually using AI, see the AI adoption statistics article.
Content Production — Points 1–5
These five points apply to every marketer regardless of scale. Content production with AI has the lowest barrier to entry and the most grey zones: copyright, image rights, and disclosure obligations for EU audiences. For a detailed breakdown of AI content marking standards, see the article on C2PA and SynthID.
1. Does AI-generated content go through meaningful human editing before publication? The US Copyright Office Part 2 guidance (February 2025) confirmed that pure AI output is not copyrightable: sufficient human creative selection and arrangement is required. UK law (CDPA s.9(3)) grants 50-year protection to computer-generated works, but only where a human author made the arrangements. Without genuine human input, you do not own the result.
2. Do you have written consent from the real person if an AI image reproduces a recognisable face? Image rights apply across jurisdictions regardless of whether an image is AI-generated or photographed. In the EU, this is grounded in data protection and personality rights; in the UK, it derives from privacy and misuse-of-private-information frameworks. “It’s AI, not a real photo” is not a defence: the likeness is what matters.
3. Do you know which AI tools in your stack automatically embed a C2PA (Content Credentials) signature in the output file? Adobe Firefly and DALL-E 3 sign automatically; Midjourney and Stable Diffusion do not. Not knowing the state of your own stack means you cannot confirm provenance, cannot close a compliance gap proactively, and cannot answer a client who asks. Detailed tool comparison: AI content marking article.
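A quick way to screen your existing asset library is a byte-level check: C2PA manifests are stored in a JUMBF box whose label contains the ASCII string "c2pa", so the marker shows up in signed files. This is a rough heuristic only; use the official c2patool for real verification:

```python
def maybe_has_c2pa(path: str) -> bool:
    """Rough screen for a C2PA manifest in an image or video file.

    Absence of the 'c2pa' marker is a strong hint the file is unsigned;
    presence is NOT proof of a valid manifest. Treat this as triage and
    verify flagged files with the official c2patool.
    """
    with open(path, "rb") as f:
        return b"c2pa" in f.read()
```

Running this across an asset folder tells you in seconds which files need a closer look before an EU-facing campaign.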
4. Are you ready to disclose AI-generated content to EU audiences from 2 August 2026? EU AI Act Article 50 obliges deployers (anyone using AI in a commercial context) to inform audiences when content is AI-generated. If your ads or content reach any EU country, this applies to you. Penalties reach up to €15 million or 3% of global annual turnover, whichever is higher.
5. Does your prompt reproduce a specific protected work verbatim or near-verbatim? Style is not protected by copyright; specific text, melody, or design is. “Write in the style of X” is generally fine. “Reproduce the opening paragraph from book X” is infringement. The distinction is between inspiration and reproduction, and it is a line worth knowing before a client brief sends you across it.
Advertising — Points 6–10
Ad platforms have moved faster than regulators on several of these requirements. Meta has required disclosure of AI-generated elements in ad creatives since February 2025 (auto-marking content from Meta’s own GenAI tools) and extended the disclosure requirement to all AI ad creatives in March 2026. TikTok has a separate toggle. Google has its own requirements for synthetic media in election ads and realistic AI-human ads. These are not theoretical risks: a flagged creative means a blocked account or a stopped campaign.
6. Do you mark AI-generated elements in Meta Ads creatives? Meta requires “Made with AI” / “AI Info” labelling for realistically altered or fully AI-generated material. Meta auto-marks creatives from its own GenAI tools (since February 2025) and continues to extend this to broader AI ad creatives, so check the current policy before each campaign. Non-compliance risks ad rejection or account restriction.
7. Does your advertising contain deepfakes of real people without written consent? Deepfakes of real people without consent are explicitly prohibited by Meta, Google, and TikTok policy, and they carry legal exposure under personality and image rights frameworks in the EU and UK. Account suspension and civil claims are both live risks, not hypotheticals.
8. For election ads or realistic AI-human creatives in Google Ads, are you adding the required disclosure? Google requires clear disclosure when ads contain synthetic images or voices of real people, particularly in sensitive categories. Google also requires IPTC DigitalSourceType TrainedAlgorithmicMedia metadata for AI-generated images. Missing this step can result in campaign suspension during a live flight.
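One way to attach the DigitalSourceType value is an XMP sidecar file next to the image. A minimal sketch follows; "trainedAlgorithmicMedia" is the IPTC NewsCodes term for fully AI-generated media, and most production pipelines would embed the property directly with a tool such as exiftool instead of writing a sidecar:

```python
# Minimal XMP sidecar declaring an image as fully AI-generated.
XMP_SIDECAR = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description
    xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/"
    Iptc4xmpExt:DigitalSourceType="http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"/>
 </rdf:RDF>
</x:xmpmeta>
"""

def write_sidecar(image_path: str) -> str:
    """Write image.jpg.xmp next to image.jpg and return the sidecar path."""
    sidecar_path = image_path + ".xmp"
    with open(sidecar_path, "w", encoding="utf-8") as f:
        f.write(XMP_SIDECAR)
    return sidecar_path
```

Check whether your ad platform reads sidecars or requires the metadata embedded in the file itself before relying on this.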
9. Have you checked whether your stock image licence permits AI editing for commercial use? A number of standard stock licences, including some Getty and iStock tiers, do not permit AI transformation of the source file. Using an AI-edited version of a licensed image in a commercial campaign can constitute a licence breach, which is a separate issue from platform policy.
10. On TikTok, are you enabling the “AI-generated content” toggle for AI video or AI audio? TikTok auto-reads C2PA from some files and marks them automatically, but for content without C2PA metadata (Midjourney, ElevenLabs, and most audio tools) the responsibility for flagging sits with you. Failing to disclose leads to video removal; repeated violations affect account standing.
Client Data and AI Providers — Points 11–15
This is the most common blind spot for marketing freelancers. When you paste an email list or CRM data into a ChatGPT prompt, you are transferring personal data to a third party. Without a signed DPA (Data Processing Agreement: a contract defining how the processor handles data on your behalf), that transfer has no legal basis under GDPR, even if your intent was simply “write a personalised email”. Current guidance from the European Data Protection Board: edpb.europa.eu.
11. Do you have a signed DPA with OpenAI? OpenAI’s Data Processing Addendum is available self-serve for API, Team, and Business plan users. Free and Pro accounts do not have DPA coverage, which means they are non-compliant for processing EU personal data. This is a hard line, not a grey area.
12. Do you have a signed DPA with Anthropic? Anthropic’s DPA is available for the Claude Team plan and above through its Commercial Terms. Free and Pro plans do not include a DPA and do not cover GDPR subprocessor requirements. If you use Claude.ai free for tasks involving client personal data, you are operating without a legal transfer basis.
13. Do you have the Google Cloud DPA active for Workspace or Vertex AI usage? The Google Cloud DPA must be activated through the Admin Console under Legal and compliance; it is not automatic. Without this step, processing EU data through Google AI services formally lacks a legal subprocessor basis, even if you have a standard Google Workspace subscription.
14. Do you anonymise or pseudonymise client personal data before passing it to an LLM? GDPR’s data minimisation principle (Article 5) requires that only data necessary for the task is transferred. For most content tasks, “client John, B2B segment, product category X” does the job: full name, phone number, and purchase history rarely add value to the prompt but substantially increase your risk exposure.
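This habit is worth automating so it cannot be skipped under deadline pressure. A minimal sketch follows; note this is pseudonymisation, not full anonymisation, since the stable token lets you re-match output to the contact locally:

```python
import hashlib

# Direct identifiers that should never reach a prompt; extend as needed.
PII_FIELDS = {"name", "email", "phone"}

def pseudonymise(record: dict) -> dict:
    """Swap direct identifiers for stable opaque tokens.

    The hash is deterministic, so you can re-identify the LLM output
    on your side, but the raw identifier never enters the prompt.
    """
    safe = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            token = hashlib.sha256(str(value).encode()).hexdigest()[:8]
            safe[key] = f"{key}_{token}"
        else:
            safe[key] = value
    return safe

contact = {"name": "John Smith", "email": "john@example.com",
           "segment": "B2B", "product": "category X"}
prompt_input = pseudonymise(contact)
```

The segment and product fields survive untouched, which is exactly the level of detail most content tasks actually need.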
15. Is your subprocessor chain documented in your ROPA (Records of Processing Activities)? If you process data belonging to EU data subjects, you are required to maintain Records of Processing Activities and list all subprocessors: OpenAI, Anthropic, Google, ElevenLabs, and any other AI tool that touches personal data. Enterprise clients request this documentation with increasing regularity.
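A ROPA subprocessor register does not need special software; one structured record per provider is enough to answer an enterprise client's questionnaire. An illustrative sketch, where the field names and entries are examples rather than a legal template:

```python
# Illustrative subprocessor register: one entry per AI tool that
# touches personal data. Entries and fields are examples only.
subprocessors = [
    {"provider": "OpenAI", "purpose": "email copy generation",
     "dpa_signed": True, "data_categories": ["name (pseudonymised)", "segment"]},
    {"provider": "ElevenLabs", "purpose": "ad voiceover",
     "dpa_signed": False, "data_categories": []},  # no personal data sent
]

def missing_dpas(register):
    """Flag providers that receive personal data without a signed DPA."""
    return [s["provider"] for s in register
            if s["data_categories"] and not s["dpa_signed"]]
```

Running the check before each new tool is added keeps the register honest instead of decorative.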
Not sure whether your current stack has all DPAs in place and your ROPA up to date? I run AI compliance audits in one day. Message the bot: @adastra_assistant_bot
Chatbots and Automation — Points 16–20
A chatbot is an AI system within the meaning of the EU AI Act if it interacts with people in natural conversation. From 2 August 2026, the requirement to disclose AI interaction at first contact becomes mandatory for EU audiences. Separately, GDPR Article 22 governs automated decisions that produce legal or similarly significant effects on individuals, and this applies to AI-based lead scoring if it determines who gets served, offered a discount, or deprioritised.
16. Does your chatbot inform users at first contact that they are talking to an AI, not a human? EU AI Act Article 50 requires deployers of conversational AI systems to notify users: even if the bot sounds natural and human-like, the first message or a persistent footer must make the AI nature clear. This is mandatory from 2 August 2026 for EU-facing deployments.
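Operationally this is one line in the bot's opening handler. A minimal sketch; the disclosure wording here is an example, not prescribed text from the Act:

```python
# Example disclosure text; the Act requires clarity, not this exact wording.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def opening_message(greeting: str) -> str:
    """Make the AI nature clear in the very first message the user sees."""
    return f"{AI_DISCLOSURE}\n\n{greeting}"
```

Putting the disclosure in the opening handler rather than a settings page means no deployment can ship without it.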
17. Is there an active opt-in (not a pre-ticked box) before collecting email or phone through a bot form? GDPR requires freely given, specific, informed, and unambiguous consent. A pre-ticked checkbox or a “by continuing this conversation you agree” formulation is not valid consent under GDPR. This applies to any EU-resident user regardless of where your business is registered.
18. Is every AI tool in your stack named in your privacy policy, with provider name and processing purpose? A privacy policy is a legal document, not a formality. If OpenAI or Anthropic is not listed as a subprocessor with a described purpose, you are processing data without a transparent legal basis, even if you have a signed DPA. Both documents are required.
19. If AI automatically makes decisions with legal or significant effect on a person (rejection, pricing, access restriction), is there a human review procedure? GDPR Article 22 applies to decisions with legal or similarly significant effect (credit, employment, insurance); typical marketing lead scoring usually does not trigger it if scoring is recommendatory and a human approves. But if AI auto-rejects without human review, add a human review path to stay on the safe side.
20. Do you store the minimum necessary data in your CRM and delete old contacts on a defined retention schedule? GDPR’s storage limitation principle (Article 5) prohibits keeping personal data longer than necessary for the original purpose. A CRM full of inactive contacts from 2020 with no documented retention basis is a compliance exposure, even if you never email them. Set a schedule and document it.
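A retention schedule is easy to enforce once it is written down. A sketch of a periodic sweep, assuming an example 24-month window; pick and document your own retention basis:

```python
from datetime import date, timedelta

RETENTION_DAYS = 730  # example: 24 months; document your own basis

def expired_contacts(contacts, today=None):
    """Return contacts past the retention window, due for deletion
    or a re-permission campaign."""
    today = today or date.today()
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return [c for c in contacts if c["last_activity"] < cutoff]
```

Run the sweep monthly and log each deletion batch: the log itself is the documentation an auditor asks for.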
Email, AI Copyright, and Automation — Points 21–25
Email marketing is where personal data most commonly enters an LLM without the marketer realising what has happened. Pasting a subscriber list into a prompt to personalise a campaign sequence is a data transfer to a third-party subprocessor, without a DPA and without the data subjects’ awareness. For a broader look at AI automation for small businesses, see the AI automation article. For AI in e-commerce email flows, see the Shopify and ChatGPT article.
21. Do you have explicit consent for email marketing, not “they agreed to the Terms of Service at some point”? GDPR, CAN-SPAM, and CASL each require clear consent specifically for marketing communications. General ToS acceptance is not a valid legal basis for sending marketing emails. If your consent records cannot demonstrate opt-in to marketing, your email list has a legal basis problem, regardless of open rates.
22. Is every AI tool in your email workflow (generation, personalisation, subject line testing) documented in your privacy policy? Klaviyo AI, Mailchimp Content Optimizer, and similar tools are subprocessors in your email chain. Each must be named in your privacy policy with a described purpose. Adding a new AI feature to an existing platform is a change to your subprocessor list: it requires a privacy policy update, not just a click to enable.
23. Have you reviewed the GDPR compliance position of your email provider (Mailchimp, Klaviyo, Brevo) before enabling new AI features? Providers add AI features regularly: predictive sending, AI subject lines, send-time optimisation. Each new capability may add new subprocessors or change data transfer terms. A compliance review should happen at each significant platform update, not just at onboarding.
24. Are you passing real client personal data (names, emails, purchase history) into a ChatGPT or Claude prompt without anonymisation when generating emails? Even when the task is simply “write a personalised follow-up email”, a real name and real purchase data in the prompt constitute a personal data transfer to a subprocessor. Without a DPA and without minimisation discipline, this is a GDPR breach on data minimisation grounds, regardless of how routine the task feels.
25. Is there a working unsubscribe link in every automated email, one that processes the request without manual intervention? A functional unsubscribe mechanism is a legal requirement under CAN-SPAM, CASL, and GDPR. “Email us to opt out” is not compliant: unsubscribes must be processed automatically and honoured within 10 business days at most. This is one of the easiest compliance failures to audit and one of the costliest to defend.
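This is also the easiest point to self-audit with a pre-send check. A sketch that gates sending on both the visible unsubscribe link and the List-Unsubscribe header that mailbox providers act on:

```python
import re

UNSUB = re.compile(r"unsubscribe", re.IGNORECASE)

def can_send(html_body: str, headers: dict) -> bool:
    """Block any automated send that lacks a visible unsubscribe link
    or the machine-readable List-Unsubscribe header."""
    has_link = bool(UNSUB.search(html_body))
    has_header = "List-Unsubscribe" in headers
    return has_link and has_header
```

Wiring this into the send pipeline turns a legal requirement into a failing test instead of a post-incident discovery.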
IP, Transparency, and Vendor Due Diligence — Points 26–30
This block covers the things most freelancers and small agencies ignore until the first dispute: client contracts, provider terms, and subcontractor vetting. If you subcontract AI work to another freelancer or agency, their compliance gaps become your liability: you remain the data controller and the party with the client relationship. For detailed guidance on AI content attribution and marking standards, see the C2PA and SynthID article.
26. Do your client contracts include a clause disclosing the use of AI tools in delivering the work? Without this clause, a client can challenge ownership of deliverables, dispute the basis of your fee, or raise copyright questions, particularly if they later discover the work involved AI generation. Disclosure protects both parties and removes ambiguity about who owns what. Adding a clear AI-use clause is a one-time contract update, not an ongoing cost.
27. Does your commercial use of AI outputs comply with the provider’s Terms of Service? For example: not training competing models on OpenAI output, and not using voice clones commercially under a non-commercial licence. Most LLM Terms of Service prohibit using outputs to train competing models, and several voice synthesis platforms carry non-commercial restrictions on specific licence tiers. Violations can result in account termination and licence claims. Reading the ToS once per provider per year is not excessive; platforms update them.
28. If registering IP in the US, are you disclosing AI-generated components in the copyright application? The US Copyright Office Part 2 guidance (February 2025) requires disclosure of AI-generated elements when registering copyright; omitting this can invalidate the registration. In the UK, CDPA s.9(3) provides 50-year protection for computer-generated works. In both jurisdictions, transparency at registration is cleaner than a dispute after the fact.
29. Have you reviewed an AI provider’s compliance posture before adding them to a client stack: SOC 2, ISO 27001, subprocessor list, data residency? When an AI tool processes your clients’ data, you are responsible for vendor selection and must be able to demonstrate due diligence. Enterprise clients ask for this documentation as part of vendor onboarding. Being able to point to a reviewed compliance posture is the difference between a smooth procurement conversation and a lost contract.
30. When subcontracting AI work to freelancers, do you verify that they have their own DPAs in place and understand the compliance requirements? Under GDPR, if you engage a subcontractor who transfers client personal data to an LLM without a DPA, you bear liability as the data controller, not them. Compliance in your supply chain is your responsibility to verify. A short pre-engagement checklist (three to five questions) closes this gap before it opens.
Quick Summary — 30 Points by Priority
You do not need to do everything at once. Here is the breakdown by urgency, from “do this today” to “review quarterly”.
Do today:
- Point 1: Human editing of AI content
- Point 2: Consent for AI images with faces
- Point 7: No deepfakes without consent
- Point 11: DPA with OpenAI
- Point 14: Anonymise before LLM prompts
- Point 16: Chatbot discloses it is AI
- Point 21: Explicit consent for email marketing
- Point 25: Working unsubscribe in every email
Before 2 August 2026:
- Point 4: AI disclosure for EU audiences
- Point 6: AI labelling in Meta Ads
- Point 16: Chatbot AI disclosure
- Point 18: AI tools in privacy policy
- Point 26: AI clause in client contracts
Ongoing review:
- Points 11–15: DPAs and subprocessors
- Point 23: Email provider GDPR review
- Point 29: Vendor compliance posture
- Point 30: Subcontractor vetting
| # | Point | Block | Priority |
|---|---|---|---|
| 1 | Human editing of AI content | Content | Today |
| 2 | Consent for AI images with faces | Content | Today |
| 3 | C2PA stack audit | Content | This week |
| 4 | EU disclosure from 2 August 2026 | Content | Before August 2026 |
| 5 | Check prompts for reproduction | Content | Today |
| 6 | AI labelling in Meta Ads | Advertising | Before August 2026 |
| 7 | No deepfakes without consent | Advertising | Today |
| 8 | Disclosure for AI-human Google Ads | Advertising | This week |
| 9 | Stock licence permits AI editing | Advertising | This week |
| 10 | TikTok AI-generated toggle | Advertising | This week |
| 11 | DPA with OpenAI | Data + AI | Today |
| 12 | DPA with Anthropic | Data + AI | This week |
| 13 | Google Cloud DPA active | Data + AI | This week |
| 14 | Anonymise before LLM | Data + AI | Today |
| 15 | Subprocessors in ROPA | Data + AI | This month |
| 16 | Chatbot discloses AI at first contact | Chatbots | Before August 2026 |
| 17 | Active opt-in in bot forms | Chatbots | This week |
| 18 | AI tools named in privacy policy | Chatbots | Before August 2026 |
| 19 | Human review for AI lead scoring | Chatbots | This month |
| 20 | Data retention schedule in CRM | Chatbots | This month |
| 21 | Explicit consent for email | Email | Today |
| 22 | AI email workflow in privacy policy | Email | This month |
| 23 | Email provider GDPR compliance review | Email | Quarterly |
| 24 | Anonymise in email prompts | Email | Today |
| 25 | Working unsubscribe in every email | Email | Today |
| 26 | AI clause in client contracts | IP + Vendors | Before August 2026 |
| 27 | Provider ToS compliance | IP + Vendors | This week |
| 28 | AI disclosure in copyright registration | IP + Vendors | At registration |
| 29 | Vendor SOC 2 / ISO 27001 review | IP + Vendors | Quarterly |
| 30 | Subcontractor compliance vetting | IP + Vendors | At engagement |
Frequently Asked Questions
Which points should I do first if I only have one afternoon?
Eight points cover the highest-probability risks regardless of geography: human editing of AI content (1), written consent for AI images with real faces (2), no deepfakes without consent (7), a DPA with OpenAI if you use ChatGPT with client data (11), anonymising data before LLM prompts (14), chatbot disclosure if you run one (16), explicit consent for email lists (21), and a working unsubscribe link (25). That is about two hours of setup, one afternoon if you do it properly. Everything else on the list extends protection for more complex or EU-facing work.
Can I skip the EU-specific points if I have no EU or UK clients?
You can deprioritise the points that directly reference EU AI Act Article 50 and GDPR, but only if you are genuinely confident none of your content or advertising reaches EU audiences. Points 1, 2, 5, 7, 14, 21, 24, and 25 remain relevant regardless of geography: they cover copyright, image rights, data minimisation, and email law that applies in most major jurisdictions. One practical caveat: Meta and Google Ads both offer automatic audience expansion settings that can serve ads to EU users even when your targeting is not explicitly set to EU territories. Check before assuming you have no EU exposure.
How much does this cost?
Most of this list is free; it is a change in working process, not a purchase. DPAs with OpenAI, Anthropic, and Google are all self-serve and cost nothing to sign. Anonymising data before prompts is a habit, not a tool. Adding disclosure to a chatbot is a line of text. The real cost only arises if you need to upgrade from a free plan to a paid tier (OpenAI Team, Claude Team) to get DPA coverage, which is justified if you are running any volume of client data through these tools. Updating your privacy policy and client contracts may require a lawyer for the first pass, but it is a one-time investment, not a recurring line item.
What must be in place before 2 August 2026?
Before 2 August 2026, the priority items are points 4, 6, 16, 18, and 26: AI disclosure for EU-facing content and ads, AI labelling in Meta Ads, chatbot disclosure that it is an AI, updating your privacy policy to name AI tools, and adding an AI-use clause to client contracts. After August, these requirements carry legal weight with penalties under EU AI Act Article 50: up to €15 million or 3% of global turnover. One important caveat: the final Code of Practice under Article 50 had not been published at the time of writing. Specific implementation details may be refined, so monitor the official Article 50 status for updates.
How do I vet a subcontractor’s AI compliance?
I use a short pre-engagement checklist: Do they have DPAs in place with the AI providers they use? Do they anonymise client personal data before putting it into prompts? Is their privacy policy updated to list AI tools? Do they know what EU AI Act Article 50 requires if they work on EU-facing projects? If someone responds with “that’s not really my area, I just write copy”, that is a concrete risk signal, not a minor gap. Asking these questions directly before engagement is normal professional due diligence, and a competent freelancer will answer without hesitation.
Want updates on the EU AI Act and the next article in the series? Join the Telegram channel @adastra_marketing_blog.
30-Point AI Marketing Compliance Checklist — PDF
PDF “30-Point AI Marketing Compliance Checklist for Freelancers & Small Businesses” — 6 sections: Scope & Levels, Providers + DPA, Client Data, Content + Disclosure, Subprocessors, Timeline + Priorities. Format: one point — one question — one reason. Print it out and check your stack in 30 minutes.
Download the Checklist in Telegram →
Disclaimer. This checklist is operational hygiene, not legal advice. Laws and their interpretations change: the final Code of Practice under EU AI Act Article 50 had not been published at the time of writing. For contracts, claim management, and situations with legal consequences, consult a lawyer specialising in IT, IP, or data protection. Legal context and jurisdiction overview in the AI regulation article.
Discuss on Telegram