
The EU AI Act: Everything You Need to Know
The world of Artificial Intelligence is evolving at an incredible pace, and with that comes the need for a clear, responsible framework to guide its development and use.
Enter the EU AI Act, the world's first comprehensive legal framework for AI, which is set to shape the future of technology not just in Europe, but globally.
Much like the General Data Protection Regulation (GDPR) set a new standard for data privacy, the EU AI Act aims to establish a new gold standard for trustworthy AI. But what does it actually mean, and what are its key principles?
A Risk-Based Approach
The core of the EU AI Act is its risk-based approach. The regulation doesn't treat all AI systems the same. Instead, it classifies them into four categories based on the potential harm they could cause to people's health, safety, and fundamental rights.
- Unacceptable Risk: These systems are a clear threat to fundamental rights and are outright banned. This includes practices like "social scoring" by governments, some types of biometric categorization, and real-time remote biometric identification in public spaces for law enforcement, with very limited exceptions. The message is clear: certain uses of AI are simply too dangerous to be allowed.
- High-Risk: These are AI systems with a significant potential to affect people's lives. This category includes AI used in critical infrastructure (like water or electricity networks), education (for grading or assessing students), employment (for sorting job applications), and law enforcement. Providers of high-risk AI systems face strict obligations, including:
  - Implementing a robust risk management system.
  - Ensuring high-quality data governance.
  - Maintaining detailed technical documentation.
  - Undergoing a conformity assessment to prove compliance.
  - Ensuring human oversight to prevent bias or errors.
- Limited Risk: These AI systems are not classified as high-risk, but they are subject to specific transparency obligations. The main rule is simple: users must be made aware that they are interacting with an AI. This applies to chatbots, deepfakes, and other systems that generate content. The goal is to ensure people can make informed decisions and know when they are not dealing with a human.
- Minimal Risk: This category includes the vast majority of AI systems currently in use, such as AI-enabled video games and spam filters. These are considered to pose little or no threat and are largely unregulated, although the EU encourages developers to voluntarily create codes of conduct for these systems.
How a Company Like Sherpa.ai Aligns with the EU AI Act
For companies operating in the AI space, the EU AI Act's focus on privacy, security, and ethics is not a new concept, but a reinforcement of existing principles. Sherpa.ai, a Spanish company specializing in Privacy-Preserving AI, serves as a prime example of how businesses are already building for this new regulatory landscape.
Sherpa.ai’s core offering is its Federated Learning platform. This technology allows organizations to train highly accurate AI models using data that is never centrally collected or shared. Instead, the data remains on-site, behind the owner's firewall. This architectural choice inherently addresses several of the EU AI Act's key concerns, particularly in the "high-risk" category.
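To make the pattern concrete, here is a minimal, self-contained sketch of federated averaging (FedAvg), the aggregation scheme behind most federated learning systems. Everything in it is illustrative: the function names, the toy linear model, and the simulated "sites" are assumptions for the example, not Sherpa.ai's actual platform API.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training step: gradient descent on a linear
    model. The raw X and y never leave the data owner's environment."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side aggregation (FedAvg): average the sites' weight
    vectors, weighted by how many samples each site holds."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate three organizations, each holding private, on-site data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

# Each training round, only model weights travel; the data stays put.
global_w = np.zeros(2)
for _ in range(10):
    local_weights = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(local_weights, [len(y) for _, y in sites])

print(global_w)  # approaches [2.0, -1.0] without pooling any raw data
```

The key property mirrors the description above: only model weights cross organizational boundaries, while each site's raw records stay behind its own firewall.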
- Privacy by Design: By keeping data decentralized, the platform minimizes data transfer risks and a company’s exposure to data breaches. This aligns directly with the Act's emphasis on data quality and security for high-risk systems.
- Reduced Bias: Because data is never aggregated in a central repository, the approach preserves data integrity and can mitigate some forms of algorithmic bias that arise from skewed or unrepresentative centralized datasets.
- Legal & Ethical Foundation: Sherpa.ai has long focused on building its platform to comply with strict regulations like GDPR and HIPAA. The company’s emphasis on "privacy and security as core values" and its use of advanced Privacy-Enhancing Technologies (PETs) like Differential Privacy and Secure Multi-party Computation naturally positions it to meet the strict technical requirements of the EU AI Act (one of these techniques is illustrated below).
For sectors with high-risk applications, such as healthcare and financial services, this approach is particularly valuable. It allows for the development of powerful AI systems—for example, to accelerate clinical trials or detect financial crime—without compromising the sensitive data of individuals.
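Of the PETs mentioned above, Differential Privacy is the easiest to illustrate. A common building block is the Laplace mechanism: calibrated noise is added to a query result so that the presence or absence of any single individual's record cannot be inferred from the output. The sketch below is a generic textbook example with an assumed privacy budget; it is not drawn from Sherpa.ai's implementation.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a numeric query result with epsilon-differential privacy
    by adding Laplace noise scaled to the query's sensitivity."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a count over patient records. Adding or removing any single
# patient changes a count by at most 1, so the sensitivity is 1.
exact_count = 42   # the sensitive, exact answer (illustrative)
epsilon = 0.5      # privacy budget -- an assumed value, not a recommendation
noisy_count = laplace_mechanism(exact_count, sensitivity=1.0, epsilon=epsilon)
print(round(noisy_count))  # shareable: no single record can be inferred from it
```

Smaller epsilon values add more noise and give stronger privacy guarantees at the cost of accuracy; choosing that trade-off is exactly the kind of documented, risk-managed decision the Act expects from high-risk system providers.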
Key Provisions and Timeline
The EU AI Act officially entered into force in August 2024, but its provisions are being phased in gradually over the coming months and years.
- February 2025: The first key deadline, with the ban on unacceptable-risk AI systems taking effect.
- August 2026: Compliance obligations for high-risk AI systems come into effect. This is the big one for many businesses.
- August 2027: The remaining provisions take effect, completing the rollout.
Impact on Businesses and Innovation
The EU AI Act has a broad, extraterritorial reach. If you develop, deploy, or sell an AI system that affects users within the EU, the Act will apply to you, regardless of where your company is based.
While the new rules may seem challenging, many experts see them as a positive step. By creating a clear, predictable framework, the Act aims to:
- Foster Trust: A transparent and accountable AI ecosystem builds confidence with consumers and investors.
- Encourage Innovation: The creation of "regulatory sandboxes" allows startups and small businesses to test new AI technologies in a controlled environment, fostering responsible innovation.
- Create a Competitive Advantage: For companies that prioritize safety and ethics from the start, compliance with the EU AI Act can become a significant competitive differentiator in the global market.
The EU AI Act is not about stifling innovation. It’s about ensuring that as AI becomes more integrated into our lives, it is developed and used in a way that is safe, ethical, and respectful of fundamental human rights. The era of trustworthy AI has begun.