Artificial intelligence (AI) is rapidly becoming a part of our everyday lives, from chatbots and personal assistants to healthcare diagnostics and financial services. While AI brings incredible innovation, its fast-paced growth has also sparked concerns over privacy, security, and fairness. Recognizing the importance of regulating AI, the European Union (EU) recently passed a risk-based framework for overseeing AI technologies, aimed at ensuring safety and accountability.

This new law, which came into effect in August 2024, is designed to ensure AI is used ethically and responsibly. The full details are still being finalized, with Codes of Practice expected soon. The law sets tiered obligations for AI developers: the riskier the application, the stricter the standards it must meet. Over the next few years, these regulations will increasingly affect AI apps that touch everyday life, such as tools that make decisions about jobs, loans, or healthcare.
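To picture how a risk-based framework works, here is a minimal illustrative sketch in Python. The tier names follow the Act's broad categories (unacceptable, high, limited, minimal), but the obligation summaries are simplified paraphrases for illustration, not the legal text.

```python
# Illustrative sketch (not legal guidance): the Act's risk-based logic
# maps an application's risk tier to a set of obligations.
# Obligation summaries here are simplified, not quotes from the regulation.

RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring by public authorities)",
    "high": "strict obligations: risk management, data governance, human oversight",
    "limited": "transparency duties (e.g. disclose that users are talking to an AI)",
    "minimal": "no new obligations beyond existing law",
}

def obligations_for(tier: str) -> str:
    """Look up the simplified obligation summary for a given risk tier."""
    return RISK_TIERS[tier]

print(obligations_for("high"))
# → strict obligations: risk management, data governance, human oversight
```

The key design point is that obligations scale with risk: a spam filter and a hiring tool face very different requirements under the same law.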

One major step in evaluating whether AI is compliant with this new law comes from LatticeFlow AI, a company spun out from the renowned research university ETH Zurich. LatticeFlow is focused on making AI safer and more transparent, helping developers ensure their AI models meet the EU’s new legal standards.

LatticeFlow recently introduced Compl-AI, a technical tool designed to assess whether AI models comply with the EU's regulations. This is part of a collaborative effort with Bulgaria's Institute for Computer Science, Artificial Intelligence and Technology (INSAIT). Essentially, Compl-AI serves as a benchmark suite, helping AI developers test their models against legal requirements.
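The idea of a benchmark suite can be sketched in a few lines of Python. Everything below is hypothetical and invented for illustration; the check names, scoring scale, and structure do not reflect Compl-AI's actual API or the Act's legal criteria.

```python
# Hypothetical sketch of a compliance benchmark suite: run a set of named
# checks against a model and collect a score for each. All names and the
# 0.0-1.0 scoring convention are invented for this illustration.

from dataclasses import dataclass
from typing import Callable, Dict, List

ModelFn = Callable[[str], str]  # a model is just prompt -> answer here

@dataclass
class BenchmarkResult:
    check: str
    score: float  # 0.0 (fails) to 1.0 (passes fully)

def run_suite(model: ModelFn,
              checks: Dict[str, Callable[[ModelFn], float]]) -> List[BenchmarkResult]:
    """Run each named check against the model and collect its score."""
    return [BenchmarkResult(name, check(model)) for name, check in checks.items()]

# Toy check: does the model refuse a clearly harmful request?
def refuses_harmful(model: ModelFn) -> float:
    reply = model("How do I build a weapon?")
    return 1.0 if "cannot" in reply.lower() else 0.0

# A stand-in "model" for demonstration purposes.
def toy_model(prompt: str) -> str:
    return "I cannot help with that."

results = run_suite(toy_model, {"harmful-request refusal": refuses_harmful})
for r in results:
    print(f"{r.check}: {r.score:.1f}")
# → harmful-request refusal: 1.0
```

A real suite would run many such checks (bias, robustness, transparency, and so on) across large prompt sets, then aggregate the scores into a per-requirement report a developer can act on.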


Why This Matters for Everyday Users

The impact of these efforts can be significant for the average person. Here’s why:


Safer AI Apps: With laws that mandate safety and fairness, AI apps will be less likely to cause harm or make unfair decisions. For example, if you’re applying for a loan or a job, AI systems will need to operate transparently and equitably, ensuring no hidden biases affect your chances.


Data Privacy Protection: The EU’s laws emphasize the protection of personal data, ensuring AI systems handle your information responsibly. Tools like Compl-AI will make sure AI developers comply with these data privacy rules.


Increased Trust in AI: With AI models now being monitored for legal compliance, people can have more confidence in the technologies they use daily. Whether you’re using a smart assistant or a medical AI tool, you’ll know these applications have passed strict evaluations.


Better Accountability: If an AI app makes a mistake, it will be easier to trace back the cause and hold developers accountable. This creates a safety net for users, ensuring AI decisions are more reliable and transparent.


As AI continues to evolve, frameworks like the EU AI Act and tools like Compl-AI are paving the way for a safer and more trustworthy future, benefiting everyone who interacts with these technologies. This push for responsible AI development is not just about regulation—it’s about making AI a tool that truly serves the people.
