
Ban on Manipulative AI in the EU: Is Your AI Crossing the Line?
As the EU AI Act moves closer to full enforcement, many businesses are preparing for transparency obligations and risk classification frameworks. But one part of the regulation is more immediate and potentially more disruptive than most realise: certain AI practices are already banned, with the Article 5 prohibitions applying since 2 February 2025. Not mitigated, not delayed, prohibited outright.
What may come as a surprise is how broad and context-dependent these prohibitions actually are. Among the banned categories is any AI system that distorts human behaviour through manipulative, deceptive, or subliminal techniques, impairing a person’s ability to make an informed decision, particularly when that decision results in significant harm. While this may sound extreme, the reality is that many common, commercially deployed AI systems may already sit dangerously close to this line.
Let’s take a specific example: a travel booking platform uses AI to personalise offers and improve conversion rates. The system tracks user behaviour, such as search frequency, device type, time of day, and page hesitation, to predict when a customer is likely to book. Once certain patterns are detected, the AI dynamically changes the interface (a simplified sketch of this logic follows the list below):
• “Only 1 seat left!” messages appear, regardless of actual inventory.
• Lower-cost alternatives are hidden behind filters or extra clicks.
• Non-refundable options are prominently highlighted with persuasive phrases like “Smart choice!”
• The cancellation terms are deprioritised or difficult to access before confirmation.
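To make the mechanics concrete, here is a deliberately simplified, hypothetical Python sketch of what such steering logic might look like. Every name in it, from the signal fields to the thresholds, is invented for illustration; no real platform’s code is implied.

```python
from dataclasses import dataclass

@dataclass
class UserSignals:
    """Behavioural signals the platform tracks (all field names hypothetical)."""
    searches_last_hour: int   # search frequency
    seconds_on_page: float    # page hesitation
    is_mobile: bool           # device type
    local_hour: int           # time of day, 0-23

@dataclass
class InterfaceDecision:
    scarcity_banner: str | None   # e.g. "Only 1 seat left!"
    hide_cheaper_fares: bool      # bury low-cost alternatives behind filters
    promote_nonrefundable: bool   # "Smart choice!" framing
    bury_cancellation_terms: bool

def predicted_booking_intent(s: UserSignals) -> float:
    """Toy intent score in [0, 1]; a real system would use a trained model."""
    score = 0.1 * min(s.searches_last_hour, 5)          # repeat searches
    score += 0.2 if s.seconds_on_page > 60 else 0.0     # hesitation on the page
    score += 0.1 if s.is_mobile else 0.0                # harder to compare offers
    score += 0.1 if 20 <= s.local_hour <= 23 else 0.0   # tired late-evening users
    return min(score, 1.0)

def steer_interface(s: UserSignals) -> InterfaceDecision:
    """Switch the UI into 'pressure mode' once predicted intent crosses a threshold."""
    if predicted_booking_intent(s) > 0.5:
        # The scarcity message is driven by the intent score alone.
        # Actual inventory is never consulted, so the claim can be false.
        return InterfaceDecision(
            scarcity_banner="Only 1 seat left!",
            hide_cheaper_fares=True,
            promote_nonrefundable=True,
            bury_cancellation_terms=True,
        )
    return InterfaceDecision(None, False, False, False)
```

The problematic step is easy to miss in code review: the scarcity banner is triggered by the predicted intent score, not by inventory, so the “Only 1 seat left!” claim can appear while dozens of seats remain.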
In this case, the AI system is not simply recommending; it is steering the user in a specific direction using psychological levers designed to limit reflection and push for action. The individual may end up making a decision they would not have made under clearer, more neutral conditions, often at a financial cost. If that outcome qualifies as significant harm and the behavioural distortion is appreciable, the system may fall into a prohibited category under Article 5(1)(a) of the EU AI Act.
What makes this provision especially challenging is that the boundary is not always obvious. A system designed for convenience or optimisation may unintentionally impair user autonomy if it relies on opaque nudging, hidden options, or emotional manipulation. It’s not just about what the system does; it’s about how it influences human decisions, and whether that influence is fair, transparent, and proportionate.
This is why risk assessments are essential. The distinction between persuasive design and unlawful manipulation is subtle, contextual, and highly dependent on the actual effect of the system. Without a structured, expert-led evaluation, even well-meaning developers may fail to notice where their systems cross into prohibited territory.
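Parts of such an evaluation can be automated. As a narrow illustration, the hypothetical sketch below replays a logged interface state and flags the most obvious red lines from the booking example, such as a scarcity claim that contradicts actual inventory. The names and log structure are invented; checks like this can surface candidates for review, but they cannot settle the legal questions of intent, distortion, and harm.

```python
from dataclasses import dataclass

@dataclass
class ScreenState:
    """What the user actually saw at decision time (hypothetical log record)."""
    scarcity_banner: str | None   # e.g. "Only 1 seat left!"
    seats_available: int          # true inventory at render time
    cheaper_fares_visible: bool
    cancellation_terms_shown: bool

def audit_screen(state: ScreenState) -> list[str]:
    """Return red flags for one logged screen; an empty list means none raised.

    Deliberately narrow: it compares what was claimed with what was true.
    Intent, context, and severity of harm still require expert review.
    """
    findings: list[str] = []
    if state.scarcity_banner and state.seats_available > 1:
        findings.append(
            f"Claimed '{state.scarcity_banner}' while {state.seats_available} "
            "seats were actually available: deceptive scarcity claim."
        )
    if not state.cheaper_fares_visible:
        findings.append("Cheaper fares hidden behind extra interaction steps.")
    if not state.cancellation_terms_shown:
        findings.append("Cancellation terms not accessible before confirmation.")
    return findings

# Example: replaying one logged screen through the audit.
for flag in audit_screen(ScreenState("Only 1 seat left!", 14, False, False)):
    print("RED FLAG:", flag)
```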
At Novius Consulting, we support organisations in navigating this uncertainty with clarity.
Our AI compliance frameworks include:
• Impact assessments
• Risk assessments
• Tailored legal advisory
• Alignment checks with Article 5 prohibitions and other risk categories
It’s not just about checking boxes; it’s about understanding the real-world behavioural dynamics of your AI system and how those align with legal and ethical boundaries. The EU AI Act was not designed to halt innovation. It was designed to ensure that AI serves people, not the other way around. That begins with respecting the user’s capacity to make decisions freely and without undue influence.

In a fast-moving regulatory environment, businesses that take these assessments seriously will not only avoid legal exposure; they will also build better products, gain public trust, and future-proof their operations in Europe’s AI market.
At Novius Consulting, we help organisations ensure their AI systems remain both powerful and principled. Because in today’s AI economy, responsibility isn’t a constraint; it’s a competitive advantage.