ROME, April 30, 2026 — The Italian Competition Authority has closed three investigations into generative AI providers after securing commitments aimed at improving transparency around the risk of so-called “hallucinations,” or inaccurate outputs generated by AI systems.
The cases involved DeepSeek, Mistral AI and NOVA AI, and were conducted under Italy’s consumer protection framework governing unfair commercial practices.
Focus on misleading AI outputs
The authority said it had extended its scrutiny to generative AI services in recent months, focusing on whether users are adequately informed about the risk that AI-generated content may be inaccurate or misleading.
The investigations were closed without a formal finding of infringement, under Article 27(7) of the Italian Consumer Code, after the companies offered commitments to address the regulator’s concerns.
New transparency measures
Under the commitments, the companies agreed to introduce a range of measures designed to ensure users are better informed when using AI services.
These include:
- Permanent disclaimers in user interfaces, provided in Italian among other languages, warning users about the possibility of inaccurate outputs;
- Enhanced pre-contractual information, clarifying that AI-generated content may not always be reliable and should be independently verified;
- Clearer disclosures across websites and apps at different stages of the user journey, including before registration or purchase.
Additional steps by companies
As part of the commitments:
- DeepSeek agreed to invest in technology aimed at reducing hallucination risks, while acknowledging such risks cannot be fully eliminated with current technology;
- NOVA AI committed to clearly explain that its platform acts as an interface aggregating multiple chatbot services, rather than generating or processing responses itself.
Broader regulatory trend
The authority said the commitments reflect a growing focus on consumer protection risks linked to emerging AI technologies, particularly where misleading outputs could influence users’ decisions.
The case marks one of the first instances in which a European competition authority has addressed AI-related transparency issues through commitment-based resolutions rather than formal sanctions.
Source: https://en.agcm.it/en/media/press-releases/2024/4/PS12942-PS12968-PS12973
