
AI Law for Ukrainian Businesses Selling into the EU and UK in 2026

Anastasiia Kyslenko · 12 min read

Practical AI compliance map for Ukrainian businesses selling into the EU or UK in 2026: EU AI Act stages, GDPR for non-EU sellers, UK ICO guidance, ad disclosure rules, chatbot transparency.

Anastasiia Kyslenko · Digital marketing freelancer · 6+ years, 120+ clients, 5 markets (UA/EN/DE/PL/UK)

You operate from Ukraine, but your customers are in the EU or UK — and that matters more than your registration address. The EU AI Act applies to you based on where your output lands, not where your business is registered. GDPR applies to you the moment you actively target EU residents and process their data. Your Ukrainian status does not shield you from either regime. This article maps what is already in force, what is coming, and the five things you can do this month to close the most obvious gaps. AI is already used by most small businesses — the question is whether the compliance basics are in place when an EU or UK customer walks in.

I’m a marketer, not a lawyer. I implement AI for clients every day — chatbots, ad content, automations, CRM integrations — and I see exactly where Ukrainian businesses get caught out when EU or UK customers come in: DPAs not signed with AI providers, chatbots with no disclosure, AI-altered images of real people running in Meta campaigns. This article gives you the picture. For contracts, regulatory filings, and legal disputes, see a lawyer who specialises in IT and data protection.

You operate from Ukraine — but if your customers are in the EU or UK, EU rules apply to you

The most common blind spot I see in Ukrainian businesses entering EU or UK markets: “we’re not incorporated in the EU, so EU rules don’t apply to us.” That assumption is wrong, and it’s wrong in writing — specifically in GDPR Article 3(2) and EU AI Act Article 2.

Both laws have extraterritorial scope — meaning they apply based on where your customers are, not where your business is registered. Under EU AI Act Article 2, the law applies to you if the output (result) of your AI system is used in the EU, or if you supply AI systems or services to customers in the EU. It does not matter that your entity is Ukrainian.

GDPR Article 3(2) works the same way: it applies to non-EU controllers (that includes Ukrainian businesses) if they offer goods or services to EU individuals, or if they monitor the behaviour of EU individuals — for example, through retargeting campaigns or analytics tracking.

The practical test for “targeting” is not technical — it’s commercial. Ask yourself whether any of the following apply to your business:

  • Your website or shop is available in a European language (German, French, Polish, etc.) aimed at EU users
  • You accept EUR, GBP, or other EU/UK currencies
  • You offer delivery or services into EU or UK territory
  • You run paid ads (Google, Meta, TikTok) explicitly targeting EU or UK audiences
  • You have an EU or UK phone number, address, or country-specific domain (.de, .fr, .co.uk)
  • You use Google Analytics, Meta Pixel, or any other tracking tool on EU/UK visitors

If two or more of these apply, EU rules almost certainly reach you. Random EU traffic with no active targeting is a lower-risk grey zone — but the moment you run a paid campaign into Germany or the UK, the targeting test is met. Build the compliance stack accordingly.
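If you want to make that rule concrete, here is a toy self-check in Python. The six signals and the two-or-more threshold mirror the checklist above, not any statutory wording; treat a positive result as a prompt to act, not a legal determination.

```python
# Toy self-check for the targeting test described above. The six signals
# and the "two or more" rule come from this article, not from the statute
# itself; a positive result means "build the compliance stack", nothing more.

TARGETING_SIGNALS = {
    "eu_language_site": "Site or shop in an EU language aimed at EU users",
    "eu_currency": "Accepts EUR, GBP, or other EU/UK currencies",
    "eu_delivery": "Delivers goods or services into EU/UK territory",
    "eu_paid_ads": "Runs paid ads targeting EU or UK audiences",
    "eu_contact_details": "EU/UK phone number, address, or country domain",
    "eu_tracking": "Analytics or ad pixels running on EU/UK visitors",
}

def targeting_test(answers: dict) -> bool:
    """Return True if two or more targeting signals apply."""
    hits = [key for key, applies in answers.items() if applies]
    for key in hits:
        print(f"  signal: {TARGETING_SIGNALS[key]}")
    return len(hits) >= 2

answers = {key: False for key in TARGETING_SIGNALS}
answers["eu_paid_ads"] = True   # e.g. a Google Ads campaign into Germany
answers["eu_tracking"] = True   # e.g. Meta Pixel on the landing page
if targeting_test(answers):
    print("Targeting test likely met - build the compliance stack.")
```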

EU AI Act timeline — what is already in force in April 2026

The EU AI Act is the world’s first comprehensive AI regulation. It entered into force in August 2024 and is rolling out in stages through 2027. Some obligations are already in effect. Others arrive in August 2026 — which is relevant to anyone running AI-generated content in EU/UK markets.

Here is the timeline per the official EU AI Act implementation schedule:

Date | What applies | Status
2 Feb 2025 | Prohibitions on 8 categories of AI (social scoring, most biometric categorisation, manipulative systems) + AI literacy obligation for staff working with AI | In force
2 Aug 2025 | Obligations for GPAI (General Purpose AI) providers — companies that build and distribute models like GPT-4 | In force
2 Aug 2026 | High-risk AI systems (medical, financial, employment, critical infrastructure) + Article 50 synthetic content disclosure obligations | Upcoming
2 Aug 2027 | Extension period for GPAI systems that existed before August 2025 | Future

The distinction between provider and deployer matters here. A provider is a company that builds and puts an AI product on the market — for example, developing a proprietary chatbot and selling access to it. A deployer is a company that uses an existing AI system in its own business — for example, using ChatGPT to write ad copy, or ManyChat for messaging automation. Most Ukrainian SMBs are deployers. Deployer obligations under the EU AI Act are significantly lighter than provider obligations, but they do exist — particularly around prohibited practices and transparency.

Fines under the EU AI Act are set high: up to €35 million or 7% of global annual turnover for violations of the prohibitions, up to €15 million or 3% for other provider and deployer obligations. These figures are ceiling numbers — enforcement practice against small non-EU businesses is still developing. But fines accrue from the date of the violation, not the date of a complaint, which means the baseline compliance stack is not a “when we get audited” task. It is a continuous one.

GDPR for Ukrainian sellers — the targeting test, not the IP test

GDPR does not care where your server is or where your business is incorporated. It cares where your customers are and whether you are actively going after them. GDPR Article 3(2) applies to non-EU controllers — including Ukrainian ones — who offer goods or services to EU persons, or who monitor the behaviour of EU persons.

There is a common overcorrection worth flagging: “I have one EU customer, so now GDPR governs my entire business.” That is not accurate either. GDPR applies to the processing of EU residents’ personal data — and that processing needs to be connected to an active targeting of the EU market, not purely accidental traffic. The test is targeting plus actual processing of EU residents’ data. If you run a paid campaign into France and collect emails from it, you are processing EU personal data on the basis of active targeting. GDPR applies to that data set.

For a Ukrainian SMB selling into the EU that uses AI tools in its marketing or operations, the minimum compliance stack looks like this:

  1. DPA (Data Processing Agreement) with AI providers. If EU personal data passes through ChatGPT, Google Workspace AI features, or Anthropic Claude, you need a signed DPA with each provider. OpenAI offers a self-serve DPA in account settings. Google (Workspace, Vertex AI) processes it through support depending on the product tier. Anthropic handles it through its sales and support channel for commercial use — it is not a one-click process.
  2. Privacy notice listing your AI tools explicitly. Supervisory authorities in both the EU and UK now expect privacy notices to name the AI tools you use and describe what data flows through them. “We may use third-party processors” is no longer sufficient if AI is a meaningful part of your data processing.
  3. Lawful basis identified for each processing activity. For marketing to EU customers, consent (active opt-in, not pre-ticked) is the most common basis. For CRM processing of existing customer data, legitimate interest may apply — but it requires a balancing test, not just an assumption.
  4. Records of Processing Activities (ROPA) if you process EU personal data systematically — for example, running AI on a customer database or routing contact form submissions through an AI pipeline. A minimal inventory record to feed a ROPA is sketched after this list.
  5. EU Representative (GDPR Article 27). If you regularly process EU personal data on a large scale, you may need a designated EU representative — a person or entity based in the EU who handles GDPR communications on your behalf. Flag this one with a lawyer; the threshold for “regularly and large scale” is not defined numerically and depends on your processing volume.
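For items 1, 2, and 4, it helps to keep your AI-tool inventory as structured records rather than scattered notes. Below is a minimal sketch in Python; the field names are illustrative, not the prescribed GDPR Article 30 format, so confirm the required record elements with a lawyer before relying on it.

```python
# A minimal, hypothetical AI-tool inventory to feed a ROPA. Field names
# are illustrative, not the prescribed GDPR Article 30 record format.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    tool: str              # which AI system processes the data
    provider: str
    purpose: str           # why the data flows through it
    data_categories: list  # e.g. ["name", "email", "chat text"]
    lawful_basis: str      # consent / legitimate interest / contract
    dpa_signed: bool       # Data Processing Agreement in place?
    retention: str         # how long the data is kept

inventory = [
    AIToolRecord(
        tool="ChatGPT (API)",
        provider="OpenAI",
        purpose="Drafting ad copy from customer briefs",
        data_categories=["name", "email", "brief text"],
        lawful_basis="legitimate interest (balancing test on file)",
        dpa_signed=True,
        retention="30 days in provider logs per DPA",
    ),
]

# Quick gap check: any tool touching personal data without a signed DPA?
gaps = [record.tool for record in inventory if not record.dpa_signed]
print("DPA gaps:", gaps or "none")
```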

GDPR fines reach up to €20 million or 4% of global annual turnover — whichever is higher. Enforcement against non-EU entities has happened (Clearview AI being the most cited example across multiple EU member states). The size of the fine depends on scale, intent, and cooperation — but ignoring GDPR because you are not in the EU is not a defence.

UK GDPR and ICO guidance — a separate regime with a similar shape

After Brexit, the UK has its own data protection regime: UK GDPR plus the Data Protection Act 2018. It mirrors the EU’s GDPR closely but is a legally separate framework. If you have both EU and UK customers, you may need to address both regimes — and they are not automatically interchangeable.

The UK’s data protection regulator is the ICO (Information Commissioner’s Office). The ICO has published dedicated guidance on AI and data protection that goes into more detail than most EU supervisory authorities have so far. The ICO guidance covers fairness in AI, transparency, data minimisation, and individual rights in automated decision-making. It is practical and worth reading even if your primary market is the EU — the principles align closely.

Practically, if you process UK residents’ data separately from EU residents’ data, you may need separate notifications or representations. For Ukrainian SMBs just entering the UK market with modest volumes, this is worth flagging to a lawyer rather than assuming EU compliance automatically covers it.

One important distinction: the UK does not have an AI Act equivalent. The UK approach relies on existing data protection law (UK GDPR + DPA 2018), consumer protection law, and sectoral regulators. A UK AI Bill has been discussed, but as of April 2026 it is not law. That means UK AI governance is currently more fragmented — less predictable, but also less prescriptive for most SMB use cases. The ICO, ASA (Advertising Standards Authority), and the FCA (Financial Conduct Authority, for anything touching financial products or advice) are the relevant bodies depending on your sector.

The ICO has been active on AI enforcement: its fine against Clearview AI — for collecting UK facial images without consent — reached £7.5 million, although a tribunal later overturned that fine on jurisdictional grounds and the litigation continued on appeal. Clearview is an extreme case, but it demonstrates that the ICO is willing to pursue non-UK entities for UK data protection violations. Verify the current status with a lawyer before relying on it: enforcement actions can be appealed and outcomes change.

AI-generated content ownership when you sell internationally

If you create content with AI tools — copy, images, video — and use it across EU, UK, and US markets, you need to understand that intellectual property rules differ significantly between jurisdictions. The short version: layer human creative input on top of AI output if you want strong, jurisdiction-portable rights.

Here is a quick map by jurisdiction relevant to Ukrainian SMBs operating internationally:

Ukraine (Law 2811-IX, adopted 1 December 2022): Only a human can be an author. Pure AI-generated output without meaningful human creative input does not qualify for classical copyright protection. The law does provide a separate sui generis (Latin for “of its own kind” — a distinct legal protection category) regime for non-original AI-generated objects, giving the person who organised the generation certain economic rights. This protection is weaker and shorter-lived than full copyright — 25 years versus 70 years post-death for a human author — and carries no moral rights.

EU: There is no harmonised EU position on AI-only authorship. CJEU (Court of Justice of the European Union) case law and member state practice generally require human authorship for copyright protection. Pure AI output with no human creative contribution is likely unprotected in most EU jurisdictions. The specific level of “human creative input” required is not precisely defined and will be shaped by case law over the next few years.

UK (CDPA s.9(3), the Copyright, Designs and Patents Act 1988): The UK is one of the few jurisdictions that explicitly protects “computer-generated works” — defined as works produced by a computer where there is no human author. Protection lasts 50 years from creation. This is a meaningful difference from the EU, and it means AI-generated content may have statutory protection in the UK where it does not in the EU. The scope of this protection is actively debated, and UK law may be revised. Verify current status before relying on this in contracts.

US (US Copyright Office guidance 2023, post-Thaler v. Perlmutter): Pure AI-only output is not copyrightable in the US. Human authorship is required. The US Copyright Office has stated it will evaluate AI-assisted works case by case, focusing on whether and how much human creative expression is present.

On ToS rights: OpenAI, Anthropic, and Google each assign output rights to the user in their terms of service. That is a contractual right, not statutory copyright. It means you can use the output commercially under the contract — but that contract right is not the same as a copyright claim you can enforce against a third party who copies it, and it does not automatically travel across jurisdictions.

Practical takeaway for cross-border operations: consistently layer human creative contribution on top of AI output — edit, restructure, combine with original ideas. This creates a defensible copyright position in all four jurisdictions simultaneously. Document your creative process for any AI-assisted work you intend to protect or license internationally.

AI in marketing and advertising — disclosure rules across markets

Disclosure rules for AI-generated advertising content are arriving in stages, and platform rules are already ahead of legislation in some areas. The short version: deepfakes of real people are actionable now across all four jurisdictions. EU synthetic content labelling arrives in August 2026. Platform rules on AI-altered political and social content are already live.

EU AI Act Article 50 — in force from 2 August 2026: Synthetic AI-generated audio, video, images, and text used in ways that could mislead people must carry machine-readable marking. Article 50 includes carve-outs for minor editing assistance, clearly artistic or satirical use, and other specific cases — with the precise scope of those carve-outs to be defined in implementing acts that are still being drafted. If you run AI-generated creative into EU audiences — display ads, video, social — this will apply to you. Start designing your content production process with marking capability built in, so you are not retrofitting in July 2026.
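What “machine-readable marking” will mean in practice is exactly what the implementing acts will define. One existing industry convention is IPTC’s Digital Source Type metadata property, which some major platforms already read when labelling AI content. Here is a hedged sketch that writes that property with the exiftool command-line tool (which must be installed separately); treat it as one possible approach, not the mandated format.

```python
# One possible marking approach while the Article 50 implementing acts are
# pending: embed IPTC's "Digital Source Type" property via the exiftool CLI.
# The trainedAlgorithmicMedia value is an existing industry convention,
# not a legally mandated format - treat this as a sketch, not compliance.
import subprocess

AI_GENERATED = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def mark_as_ai_generated(image_path: str) -> None:
    """Write a machine-readable 'AI-generated' flag into image metadata."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-iptcExt:DigitalSourceType={AI_GENERATED}",
            "-overwrite_original",
            image_path,
        ],
        check=True,
    )

mark_as_ai_generated("campaign_banner.jpg")  # hypothetical file name
```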

UK: There is no equivalent statutory AI content labelling rule yet. The ASA (Advertising Standards Authority) and the ICO both expect transparency around AI use in advertising, particularly where it affects the fairness or accuracy of claims. The ASA has existing rules on misleading advertising that can be applied to AI-generated content — particularly if AI is used to fabricate testimonials, product results, or people’s appearances. No blanket “label every AI ad” rule exists as of April 2026.

Platform rules already in force — these apply regardless of legislation:

  • Meta: requires disclosure for AI-altered or AI-generated political, social issue, and election advertising. This is enforced at the platform level when you run such campaigns.
  • Google Ads: has disclosure requirements for election advertising involving AI-altered content. Google has also announced broader synthetic content policies — check current Google Ads policies before running AI-generated creative in sensitive categories.
  • TikTok: requires creators to label realistic AI-generated or AI-modified content using its built-in AI-generated content label. Applies to all content categories, not just political.

Deepfakes of real people: Using an AI-generated or AI-altered likeness of a real person in advertising without their explicit written consent is actionable in all four jurisdictions (EU, UK, Ukraine, US) under privacy law, personality rights, or image rights — regardless of where the EU AI Act’s implementing acts land. This is not a “wait and see” area. The legal exposure exists now, under current law, in every market you operate in.

AI chatbots, automation and your role as data controller

The most common misconception I hear from Ukrainian SMB owners who use AI chatbots: “ChatGPT or ManyChat processes the data — they’re responsible, not me.” This is wrong in every jurisdiction that has a data protection framework. You are the data controller. The AI provider is the processor. The responsibility for what data is collected, why, and with what consent sits with you.

A data controller is the entity that determines the purposes and means of processing personal data. If you deploy a chatbot that collects user names, emails, or conversation content — you decided to collect that data, for your purposes. The AI infrastructure running underneath is a processor: it executes your instructions. When a GDPR or UK GDPR supervisory authority investigates a complaint about a chatbot, they go to the controller first.

Three minimums for any AI chatbot or automation that serves EU or UK users (a code sketch follows the list):

  1. Disclosure that the user is interacting with AI, not a human. Under EU AI Act Article 50 (in force from 2 August 2026), this disclosure is mandatory for EU users. The ICO and ASA in the UK expect it as a matter of transparency and consumer protection. More practically, users who discover they were not told they were talking to a bot become complainants. Add this to your chatbot greeting — it takes three words.
  2. Privacy notice describing your AI tools and data flows. The notice needs to tell users which AI system processes their conversation, what data is stored and for how long, and who has access. “We use AI tools to improve your experience” is not compliant under either GDPR or UK GDPR as of the current enforcement posture of EU/UK supervisory authorities.
  3. Active opt-in consent before collecting contact data. Not “by continuing this conversation you agree.” Not a pre-ticked checkbox. A clear, affirmative action — a reply “yes”, a button tap, a form submission with an unchecked consent box that the user must check — before you collect their email, phone number, or any other identifier.
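Here is a minimal, framework-agnostic sketch of those three basics in Python. The names and flow are illustrative; no specific bot platform’s API is assumed, and a production setup would also log consent timestamps and handle consent withdrawal.

```python
# A framework-agnostic sketch of the three chatbot basics. Names and flow
# are illustrative - no specific bot platform's API is assumed. In
# production, log consent with a timestamp and honour withdrawals.

PRIVACY_URL = "https://example.com/privacy"  # hypothetical notice naming your AI tools

GREETING = (
    "Hi! I'm an AI assistant, not a human. "   # 1) AI disclosure
    f"How we handle your data: {PRIVACY_URL}"  # 2) privacy notice link
)

def request_contact(session: dict) -> str:
    # 3) Active opt-in before collecting any identifier: the user must
    # reply "yes"; silence or simply continuing the chat is not consent.
    session["awaiting_consent"] = True
    return "May I save your email to send the offer? Reply 'yes' to agree."

def handle_reply(session: dict, text: str) -> str:
    if session.pop("awaiting_consent", False):
        if text.strip().lower() == "yes":
            session["contact_consent"] = True
            return "Thanks! Please type your email."
        return "No problem - I won't store your contact details."
    return "How can I help?"

session = {}
print(GREETING)
print(request_contact(session))
print(handle_reply(session, "yes"))
```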

When I build or configure chatbots or AI automations for EU or UK markets, I set up these three compliance basics from day one. See the AI automation services page for what that looks like in practice, or read the overview of AI automation for small business.

5 things to do this month if you sell into the EU or UK

Regulation is still forming, but five actions close the most significant exposure points for Ukrainian SMBs operating in EU and UK markets. None of these require a lawyer to execute — though you should involve one before making binding compliance declarations.

  1. Audit your AI provider DPAs. Go through every AI tool that touches customer data: ChatGPT (OpenAI self-serve DPA in account settings), Google Workspace AI features (contact support for your tier), Vertex AI (via Google Cloud support), Anthropic Claude (via their sales/support channel for commercial use). If you have EU or UK customers and no DPA with these providers, that gap is open. Close it this week — OpenAI’s is a 15-minute process.
  2. Update your privacy notice to name every AI tool explicitly. List each tool, describe what data flows through it, and state the lawful basis for that processing. Both the ICO and EU data protection authorities have issued guidance stating that generic “third-party processor” language is no longer sufficient when AI is part of the processing chain. If your privacy notice was written before 2024 and has not been updated since, it almost certainly needs revision.
  3. Add the three disclosure elements to every chatbot and AI-facing contact form. AI disclosure greeting (one sentence), privacy notice link with AI tool list, explicit opt-in consent before collecting any contact data. Audit every live touchpoint — website chatbot, Telegram bot, WhatsApp automation, contact forms with AI routing — and verify all three are present.
  4. Remove any deepfake or AI-altered images of real people from your ads. If you are running creative that uses AI-generated likenesses of real people, or AI-modified versions of real photos, without written consent from those individuals — take them down. This applies across all markets and is actionable under current law in all four jurisdictions. The risk increases when EU AI Act Article 50 implementing acts are finalised. Do not wait for that.
  5. Mark one calendar date and one standing task, and make them someone’s responsibility. The date: 2 August 2026, when the EU AI Act Article 50 synthetic content marking obligations enter into force. The standing task: tracking the UK AI Bill and any ICO guidance updates on AI (subscribe to the ICO newsletter). If you also use structured data for AI visibility in your marketing, keep an eye on GEO (Generative Engine Optimisation) developments, which are moving fast alongside the regulatory picture.

Disclaimer: This article is an educational overview for marketers and business owners, not legal advice. Laws, enforcement practice, and their interpretation change; article numbers, dates, and regulatory statuses cited here are accurate as of 28 April 2026 based on publicly available sources, but verify anything material before acting on it. For decisions with legal consequences — contracts, regulatory notifications, compliance declarations — consult a lawyer specialising in EU/UK data protection and AI regulation.

Check your business in 20 minutes — free compliance checklist

PDF “AI Compliance Checklist for Ukrainian SMBs Selling into the EU and UK” — 5 sections: content and authorship, data and GDPR, chatbots and disclosure, your role under the EU AI Act (provider or deployer), when to call a lawyer. No legal jargon — specific questions and actions you can run through yourself.

Get the checklist in Telegram →

Frequently asked questions

Does GDPR apply to my business if it is registered only in Ukraine?

Yes, if you actively target EU residents and process their personal data. GDPR Article 3(2) applies to non-EU controllers who offer goods or services to EU individuals, or who monitor EU individuals’ behaviour. The test is targeting — not where your business is incorporated. If you run ads to EU audiences, collect their emails, or track them with analytics pixels, GDPR applies to that processing. Random EU traffic with no intentional targeting is a grey zone, but the moment you run a paid campaign into an EU country, the targeting test is met.

Does the EU AI Act apply to me if I just use ChatGPT and other off-the-shelf tools?

Potentially yes, if the output reaches EU users. Under EU AI Act Article 2, the law applies based on where the output of an AI system is used — not where the business deploying it is based. If you use ChatGPT to create content, chatbots, or automations that serve EU customers, you are a deployer under the Act and deployer obligations apply. Those obligations are lighter than provider obligations, but they include the Article 50 transparency and disclosure requirements (in force from August 2026) and the existing prohibitions on banned AI practices (in force since February 2025). If your ChatGPT use is purely internal with no EU-facing output, direct EU AI Act exposure is lower — but your AI providers still contractually bind you through their ToS to comply with applicable law.

Do I need separate DPAs for EU and UK customers?

Technically yes — UK GDPR and EU GDPR are separate legal frameworks, and UK data protection is now governed independently by the ICO. In practice, the DPAs offered by major AI providers (OpenAI, Google, Anthropic) typically cover both EU and UK data protection in a single agreement, because the two regimes are closely aligned and providers operate in both markets. Check the scope clause of any DPA you sign to confirm it covers UK processing. If you process meaningful volumes of UK customer data separately, confirm coverage with a lawyer.

Who owns the content I create with AI tools?

It varies significantly. In the EU, copyright generally requires human authorship — pure AI-only output is likely unprotected, though the exact threshold for “human creative input” is unsettled and case law is developing. In the UK, the Copyright, Designs and Patents Act 1988 section 9(3) provides 50-year protection for “computer-generated works” — making the UK one of the few jurisdictions that explicitly recognises AI-only authorship, though the scope of this is debated and may be reformed. In the US, pure AI-only output is not copyrightable per US Copyright Office guidance following the Thaler v. Perlmutter ruling — human authorship is required. In Ukraine, only humans can be authors; AI output has a weaker sui generis protection for the person who organised the generation. The safe practice across all jurisdictions: add meaningful human creative contribution so you have a defensible copyright claim everywhere.

Do I already have to label AI-generated ads?

Not under EU AI Act statutory requirements yet — Article 50 (synthetic content marking) comes into force on 2 August 2026. However, Meta, Google Ads, and TikTok already have their own platform-level disclosure requirements for AI-altered content in certain categories (political, social issue, and election advertising on Meta and Google; all synthetic media on TikTok). These platform rules apply now. Additionally, using AI to generate misleading advertising content — fabricated testimonials, products that do not exist, altered product performance results — is already actionable under existing EU consumer protection and advertising law. Labelling does not protect you from misleading content claims.

Can I run EU customer data through ChatGPT?

Yes, with proper documentation in place. You need: a signed DPA with OpenAI (available self-serve in account settings at platform.openai.com), a privacy notice that names OpenAI as a processor and describes what data you send and why, and a lawful basis for the underlying processing (consent for marketing, legitimate interest for certain CRM operations). If you send anonymised or pseudonymised data where the individual cannot be identified, the GDPR risk is materially lower. Do not send EU customer data through ChatGPT without a DPA in place — that is a direct GDPR violation in the context of EU-targeted operations.

What is the difference between a provider and a deployer under the EU AI Act?

A provider is an entity that develops an AI system and places it on the market — for example, building and selling a proprietary AI model or AI-powered product. A deployer is an entity that uses an existing AI system in its own business operations or services. Most Ukrainian SMBs are deployers: you use ChatGPT, ManyChat, Google Ads smart bidding, or other existing AI tools rather than building your own models. Deployer obligations under the EU AI Act are lighter than provider obligations but are not zero — particularly around prohibited AI practices, transparency disclosures (Article 50, from August 2026), and ensuring AI literacy among staff who work with AI systems in an EU-market context.

What can I do right now without a lawyer?

Three things you can do without a lawyer this week: sign the OpenAI DPA in your account settings (15 minutes), update your privacy notice to name your AI tools and what data flows through them, and add an AI disclosure line to your chatbot greeting. These three steps close the most common and most visible compliance gaps. The broader stack — EU representative assessment, ROPA, lawful basis documentation for each processing activity, DPAs with Google and Anthropic — takes longer and is worth doing with legal guidance. Download the free PDF checklist via the link above for a full walkthrough.


Article author

Anastasiia Kyslenko

Digital marketer with 6+ years of experience. I help businesses grow through Google Ads, SEO, analytics, and AI automation.


