ConsentLens

EU AI Act Article 50: Transparency Obligations for AI-Powered Websites

Regulatory Frameworks · Updated April 2026

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) introduces a new layer of digital transparency obligations alongside GDPR. Article 50 specifically targets transparency for users interacting with AI systems — requiring that websites deploying chatbots, AI-generated content, and deepfake-style synthetic media clearly disclose this to users. For website operators, Article 50 means that embedding a chatbot widget, using an AI-powered recommendation engine, or publishing AI-generated content without disclosure creates a new category of regulatory risk distinct from, and in addition to, GDPR cookie consent obligations. This document explains who must comply, what must be disclosed, and how ConsentLens detects AI Act compliance issues.

What the EU AI Act Is

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework specifically regulating artificial intelligence systems. It entered into force on 1 August 2024 and applies progressively: prohibitions on unacceptable-risk AI practices became applicable on 2 February 2025; obligations for general-purpose AI models apply from 2 August 2025; and most obligations for high-risk AI systems, together with the transparency obligations under Article 50, apply from 2 August 2026.

The Act uses a risk-based classification system. AI systems are categorised as unacceptable risk (prohibited), high risk (requiring conformity assessment and registration), limited risk (subject to transparency obligations), and minimal risk (no mandatory requirements). Most AI tools that website operators deploy — chatbots, recommendation systems, AI-generated content tools — fall into the limited risk category, subject specifically to the transparency requirements in Article 50.

The AI Act applies to providers, deployers, importers, and distributors of AI systems placed on the EU market or used in the EU. For website operators, the relevant role is 'deployer' — an organisation that uses an AI system under its own authority for a specific purpose. Deploying a third-party chatbot widget (such as Intercom AI, Drift, or HubSpot AI features) makes the website operator a deployer subject to Article 50 obligations, regardless of whether they built the underlying AI system.

Article 50: What It Requires

Article 50 imposes three distinct transparency obligations. First: operators of AI chatbot systems — systems designed to interact with natural persons through text — must inform users that they are interacting with an AI system, unless this is obvious from context. The disclosure must be made at or before the start of the interaction. A system presenting itself as a human agent without disclosure is a direct violation.

Second: AI-generated or AI-manipulated audio, image, video, and text content must be disclosed as such. Providers of generative systems must mark their outputs in a machine-readable format, and deployers must visibly disclose deepfake content and AI-generated text published to inform the public on matters of public interest, subject to exemptions for clearly artistic, creative, or satirical works and for law enforcement uses. In practice this reaches AI-edited images, AI-generated video, and AI-written articles; marketing copy and product descriptions generated by language models also warrant disclosure review. The combined effect is that labelling must be machine-readable as well as human-readable: metadata marking alongside visible disclosure.
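The dual requirement (visible label plus machine-readable marking) can be checked mechanically. A minimal sketch in TypeScript, with two caveats: the Act does not prescribe a specific metadata format, and the markers below (the IPTC digitalSourceType value trainedAlgorithmicMedia and a generator meta tag) are conventions some publishers use, chosen here purely for illustration.

```typescript
// Sketch: does a content fragment carry both a visible AI label and a
// machine-readable provenance marker? Marker formats are assumptions,
// not formats mandated by the AI Act itself.
function hasDualDisclosure(html: string): { humanReadable: boolean; machineReadable: boolean } {
  // Visible, human-readable label somewhere in the rendered text.
  const humanReadable = /AI[- ]generated/i.test(html);
  // Machine-readable marking: IPTC digitalSourceType or a generator meta tag.
  const machineReadable =
    /digitalsourcetype["']?\s*[:=]\s*["']?trainedAlgorithmicMedia/i.test(html) ||
    /<meta[^>]+name=["']generator["'][^>]+content=["'][^"']*AI/i.test(html);
  return { humanReadable, machineReadable };
}
```

A fragment that passes both checks satisfies the dual-disclosure pattern; a bare image with neither marker would be flagged for review.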

Third: operators of emotion recognition systems and biometric categorisation systems must inform persons exposed to those systems of their operation. This is less commonly relevant for standard website deployments but applies to retail analytics tools that use facial expression analysis or visitor categorisation based on appearance.

Who Must Comply: Providers vs Deployers

The distinction between AI system providers and deployers is critical for understanding Article 50 obligations. A provider is the company that develops, trains, and places the AI system on the market — for example, OpenAI, Anthropic, or Intercom. A deployer is the organisation that integrates and uses that system in their own product or website. Article 50's transparency obligations fall on both providers and deployers, but for different aspects.

For website operators, the relevant Article 50 obligation is the deployer's duty to inform users about chatbot interactions and AI-generated content. The chatbot provider (such as HubSpot or Intercom) has upstream obligations to make the system's AI nature configurable and disclosable — but the deployer is responsible for actually implementing the disclosure on their website. A website operator cannot rely on the provider's platform to handle disclosure automatically.

Third-party widget deployments are the highest-risk category for compliance failures. When a website embeds a chatbot widget as a JavaScript snippet, the disclosure obligation sits with the website operator — not with the widget vendor. Many widget implementations are configured by default without mandatory disclosure prompts. Website operators must verify their specific configuration and add disclosure if the widget does not provide it by default.

Real-World Case: AI Chatbots Without Disclosure

The most widespread compliance failure observed across scanned websites is the deployment of AI chatbot widgets without any disclosure that the user is interacting with an AI system. Chatbot widgets from major platforms including HubSpot, Intercom, Zendesk, and custom ChatGPT-powered implementations commonly present themselves under human agent names (such as 'Sarah' or 'Alex') without any indication that the responses are AI-generated.

In the pre-AI Act period, this was a business practice choice — some organisations preferred chatbots that presented as human for improved engagement metrics. Article 50 eliminates this option for EU-facing websites. The disclosure requirement is not satisfied by generic footer text stating 'our support uses AI tools'. It requires proactive, contextual disclosure at or before the interaction begins — typically as part of the chatbot's greeting message.
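A greeting-level disclosure check of this kind can be approximated with phrase matching. A minimal sketch, assuming an illustrative (deliberately incomplete) phrase list; a production list would be larger and localised for each EU language:

```typescript
// Illustrative phrases that signal AI disclosure in a chatbot greeting.
const DISCLOSURE_PATTERNS: RegExp[] = [
  /\bAI\b/i,
  /artificial intelligence/i,
  /virtual (assistant|agent)/i,
  /automated (assistant|chat|response)/i,
  /chatbot/i,
];

// Returns true if the greeting proactively discloses the system's AI
// nature, as Article 50 expects at or before the interaction start.
function greetingDiscloses(greeting: string): boolean {
  return DISCLOSURE_PATTERNS.some((p) => p.test(greeting));
}
```

Under this check, "Hi! I'm an AI assistant" passes, while "Hi, I'm Sarah! How can I help you today?" fails, matching the compliance pattern described above.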

Early regulatory attention has focused on high-visibility deployments: AI-generated product reviews presented as verified customer reviews, AI-generated news articles published without disclosure, and customer service chatbots presenting as named human agents. Regulators have indicated that Article 50 enforcement will scale up from 2 August 2026, the date the obligation itself becomes applicable.

The Three AI Tool Categories ConsentLens Detects

ConsentLens scans for AI tools across three categories defined by Article 50's transparency requirements. Chatbot tools are identified by their JavaScript fingerprint — the specific global variables, script source patterns, and API endpoint calls used by known chatbot platforms (HubSpot AI, Intercom, Drift, Zendesk, Tidio, Crisp, and custom deployments). When a chatbot tool is detected, the scanner checks the page's visible text for disclosure language.
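Fingerprint-based detection of this kind can be sketched as a small rule table. The script domains and global names below are widely observed public patterns for these widgets, but they are included as illustrations; the real fingerprint set is maintained vendor by vendor and is much broader:

```typescript
// One fingerprint per vendor: a script-source pattern plus the global
// the widget's SDK injects on window. Entries are illustrative.
interface ChatbotFingerprint {
  vendor: string;
  scriptPattern: RegExp; // matches the widget's <script src> URL
  globalName: string;    // global injected by the SDK after load
}

const FINGERPRINTS: ChatbotFingerprint[] = [
  { vendor: "Intercom", scriptPattern: /widget\.intercom\.io/, globalName: "Intercom" },
  { vendor: "Drift", scriptPattern: /js\.driftt\.com/, globalName: "drift" },
  { vendor: "Tidio", scriptPattern: /code\.tidio\.co/, globalName: "tidioChatApi" },
];

// Match collected script URLs and window globals against the table.
function detectChatbots(scriptSrcs: string[], globals: string[]): string[] {
  return FINGERPRINTS.filter(
    (f) => scriptSrcs.some((s) => f.scriptPattern.test(s)) || globals.includes(f.globalName)
  ).map((f) => f.vendor);
}
```

Either signal alone (script source or injected global) is enough to flag a vendor, which keeps detection robust when one signal is obscured by bundling or tag managers.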

Recommendation system tools are identified by the tracking calls associated with product recommendation engines, personalisation platforms, and 'you might also like' widget vendors. These systems are subject to Article 50 transparency requirements when they make decisions that are consequential to the user — for example, personalised product rankings that affect pricing or availability. The scanner identifies the presence of these tools and flags the need for disclosure review.

Content generation tools are identified by the presence of platform-specific metadata, hidden field patterns, or API calls associated with AI writing and image generation tools. This category is the hardest to detect automatically because AI-generated content often leaves no technical signature — it is text or images that look like any other text or images. ConsentLens flags the presence of known content generation platform integrations as potential Article 50 review points rather than confirmed violations.
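The distinction between confirmed issues and review points could be encoded as a simple classification rule. A sketch under assumed category names and severity labels (not the ConsentLens internals):

```typescript
type AiCategory = "chatbot" | "recommendation" | "content_generation";
type Severity = "violation" | "review";

// An undisclosed chatbot is the clearest Article 50 issue; recommendation
// and content-generation detections carry weaker technical signals, so
// they surface as review points rather than confirmed violations.
function classifyFinding(category: AiCategory, disclosureFound: boolean): Severity {
  if (category === "chatbot" && !disclosureFound) return "violation";
  return "review";
}
```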

Enforcement Timeline and Penalties

The EU AI Act applies graduated enforcement timelines. The transparency obligations under Article 50 apply from 2 August 2026, meaning that from this date deployers of AI systems covered by Article 50 are legally required to implement the prescribed disclosures. Before this date the Act is in force but Article 50 is not yet applicable, so EU-level enforcement is not active, though Member State consumer protection law may impose parallel obligations.

Penalties for violations of Article 50 are set at up to €15 million or 3% of worldwide annual turnover, whichever is higher. These penalties are enforced by national market surveillance authorities, a category of regulatory body that Member States are required to designate under the Act. The penalty structure for Article 50 transparency violations is deliberately lower than the top tier reserved for prohibited AI practices (up to €35 million or 7% of turnover) but exceeds the lower GDPR penalty tier (€10 million or 2%).

The AI Act interacts with GDPR in important ways. AI systems that process personal data must comply with both frameworks simultaneously. A chatbot that collects user information during a conversation is subject to GDPR for the data processing component and to Article 50 for the disclosure component. Compliance failures can attract separate enforcement actions under both regulations from different regulatory bodies.

How ConsentLens Scans for AI Act Compliance

During a scan, ConsentLens identifies AI tool deployments through four detection methods: network request analysis (API calls to known AI vendor endpoints), script source detection (recognising known chatbot and recommendation engine script fingerprints), window global variables (JavaScript objects injected by AI platform SDKs), and inline script pattern matching (configuration code that identifies the AI vendor and mode of deployment).
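The four methods can be combined into a single pass over signals collected from the rendered page. A sketch assuming plain-data inputs a crawler could produce; the field names and the rule table are illustrative, not the ConsentLens data model:

```typescript
// Signals a crawler collects from a rendered page (field names assumed).
interface PageSignals {
  networkRequests: string[]; // outgoing request URLs
  scriptSrcs: string[];      // <script src> attribute values
  windowGlobals: string[];   // globals present after page load
  inlineScripts: string[];   // inline <script> bodies
}

interface Detection {
  tool: string;
  method: keyof PageSignals;
  evidence: string;
}

// One illustrative rule per detection method.
const RULES: Array<{ method: keyof PageSignals; pattern: RegExp; tool: string }> = [
  { method: "networkRequests", pattern: /api\.openai\.com/, tool: "Custom GPT chatbot" },
  { method: "scriptSrcs", pattern: /widget\.intercom\.io/, tool: "Intercom" },
  { method: "windowGlobals", pattern: /^Intercom$/, tool: "Intercom" },
  { method: "inlineScripts", pattern: /window\.intercomSettings/, tool: "Intercom" },
];

// Run every rule against its signal type and record each match with the
// evidence string that triggered it.
function scanForAiTools(signals: PageSignals): Detection[] {
  const hits: Detection[] = [];
  for (const rule of RULES) {
    for (const value of signals[rule.method]) {
      if (rule.pattern.test(value)) {
        hits.push({ tool: rule.tool, method: rule.method, evidence: value });
      }
    }
  }
  return hits;
}
```

Recording the evidence string alongside each hit is what lets the report show "the specific domain or pattern that triggered detection", as described below.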

When an AI tool is detected, the scan result includes the tool name, category (chatbot, recommendation, or content generation), and the specific domain or pattern that triggered detection. The result is stored in the 'aiActData' field of the scan record alongside the main GDPR compliance data. This data is displayed in the scan report with an explanation of the relevant Article 50 obligation and a description of what disclosure is required.
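A hypothetical shape for that record might look as follows. Only the field name aiActData comes from the text above; the surrounding structure is an assumed sketch:

```typescript
type AiToolCategory = "chatbot" | "recommendation" | "content_generation";

// One detected AI tool, with the evidence and the report explanation.
interface AiActDetection {
  toolName: string;
  category: AiToolCategory;
  matchedPattern: string;      // domain or pattern that triggered detection
  article50Obligation: string; // plain-language explanation for the report
}

// Scan record: AI Act findings stored alongside the main GDPR data.
interface ScanRecord {
  url: string;
  gdprData: Record<string, unknown>;
  aiActData: AiActDetection[];
}

const example: ScanRecord = {
  url: "https://example.com",
  gdprData: {},
  aiActData: [
    {
      toolName: "Intercom",
      category: "chatbot",
      matchedPattern: "widget.intercom.io",
      article50Obligation:
        "Inform users they are interacting with an AI system at or before the start of the interaction.",
    },
  ],
};
```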

AI Act detection is provided as supplementary information rather than as a primary compliance score driver, because the enforcement framework is newer and the obligation applies from August 2026. Website operators are encouraged to review detected AI tools proactively and implement required disclosures before the enforcement deadline. ConsentLens will update its AI Act detection capabilities as the enforcement landscape develops.

Frequently Asked Questions

Does the EU AI Act apply to my website if I only use a third-party chatbot widget?
Yes. Deploying a third-party AI chatbot on your website makes you a deployer under the EU AI Act. Article 50 transparency obligations apply to deployers, not only to AI system providers. You are responsible for ensuring users are informed that they are interacting with an AI system, even if you did not develop the chatbot and even if the widget vendor does not enable disclosure by default.
What is the deadline for Article 50 compliance?
The transparency obligations under Article 50 apply from 2 August 2026. From this date, deployers of AI chatbots and generators of AI content in the EU must implement the required disclosures. Member states are currently designating national market surveillance authorities to enforce the Act. Some member states may impose earlier obligations through consumer protection law.
Does GDPR cover the same things as the EU AI Act?
No, but they overlap for AI systems that process personal data. GDPR governs the lawful processing of personal data — including data collected during AI-powered interactions. The EU AI Act governs the transparency, safety, and risk management of AI systems themselves. A chatbot that collects personal data during a conversation must comply with GDPR for how that data is processed, and with Article 50 for disclosing to users that they are interacting with AI. Both obligations apply simultaneously.
