Italy's antitrust authority, the AGCM, has concluded investigations into three artificial intelligence firms: DeepSeek (China), Mistral AI (France), and Scaleup Yazilim Hizmetleri (Turkey). The inquiries focused on the risk that the companies' chatbots generate misleading content, a phenomenon known as AI hallucination.
The regulator announced that the companies have made binding commitments to improve user awareness of the risks posed by AI-generated inaccuracies, including the addition of permanent disclaimers to their chatbot services.
Key Commitments
As part of their agreements, the firms will:
- Provide clearer information to users about hallucination risks on their websites and applications.
- Add permanent disclaimers to their chatbot services.
Company-Specific Actions
Each company has taken specific steps to mitigate the risks:
- DeepSeek: Will invest in technology aimed at reducing the frequency of hallucinations, while acknowledging that current technology cannot prevent them entirely.
- Mistral AI: Has committed to improving transparency regarding the reliability of its outputs.
- Scaleup Yazilim: The NOVA AI chatbot service will clarify that it provides a single interface for accessing multiple chatbots without aggregating or processing their responses.
Importance of Transparency
These commitments reflect the growing regulatory emphasis on transparency in the AI sector, particularly as concerns about the accuracy of AI outputs grow. By addressing these issues, the companies aim to build user trust and meet regulatory expectations.
Next Steps for Users
Users of these AI services should look for the new disclaimers and stay informed about how the underlying systems generate content. Understanding the limitations of these tools reduces the risk of treating AI-generated output as established fact.