Legal and ethical aspects of AI and HYPER-AI’s approach


Artificial Intelligence (AI) is transforming industries and unlocking new possibilities across sectors. However, with great power comes great responsibility. As AI becomes increasingly embedded in our daily lives, the legal and ethical challenges surrounding its development and use have never been more pressing. The HYPER-AI project is at the forefront of addressing these concerns, ensuring that innovation proceeds hand-in-hand with accountability and trustworthiness.

A Legal Landscape in Motion

In the European Union, the legal framework governing AI is rapidly evolving.
The General Data Protection Regulation (GDPR) continues to serve as a cornerstone for protecting individuals’ data, especially as AI and Machine Learning models rely on vast datasets for training and operation. Compliance with the GDPR means AI systems must ensure transparency, practise data minimisation, and uphold the right to explanation, all of which are crucial to maintaining public trust.
More recently, the EU AI Act has emerged as a pivotal piece of legislation, establishing the most comprehensive legal framework to date of harmonised rules for the development and use of AI systems throughout the EU. It introduces a risk-based classification of AI systems, and its main goal is to ensure that AI systems used in the EU are safe, transparent, and respectful of fundamental rights.

HYPER-AI’s Proactive Approach

The HYPER-AI project recognises that navigating this complex legal terrain is not just a compliance task, but a foundational pillar for sustainable AI innovation. The project is embedding ethical and legal considerations into the development lifecycle of its AI technologies from the outset. Particular attention is given to adhering to EU legislation, which provides the overarching framework for the project's legal and financial compliance.
Key initiatives include:

  • GDPR Compliance: A central obligation under the GDPR is to provide individuals with clear and transparent information about how their data is collected, processed, and used. GDPR compliance will be considered at all stages of the project and across all entities involved. The principle of collecting only the data necessary for the purpose of processing must be balanced against the data-intensive needs of AI systems. HYPER-AI therefore ensures data protection by design and by default, implementing privacy-preserving techniques such as anonymisation, encryption, and robust data governance protocols (a minimal illustration follows this list).
     
  • Alignment with the AI Act: The project is proactively assessing AI system risks, documenting decision-making processes, and building transparent AI tools to meet the upcoming regulatory requirements. Each of the project’s five Use Case scenarios will be thoroughly examined as it is implemented; notably, none of the HYPER-AI Use Cases involves an AI system that could, in any way, be classified as ‘prohibited’ under Article 5 of the AI Act.
     
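To give a flavour of what “data protection by design and by default” can look like in practice, the short Python sketch below illustrates two of the ideas above: data minimisation (keeping only the fields needed for a stated purpose) and keyed pseudonymisation of direct identifiers. It is a generic, hypothetical example rather than HYPER-AI’s actual implementation; the field names, the REQUIRED_FIELDS whitelist and the pseudonymise_id helper are illustrative assumptions, and pseudonymised data still counts as personal data under the GDPR, so the wider safeguards described in this article remain necessary.

```python
import hashlib
import hmac

# Hypothetical secret held by the data controller; in a real deployment this
# would come from a secrets manager and never be hard-coded in source.
PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-key"

# Fields assumed to be strictly necessary for the stated processing purpose
# (data minimisation); everything else is dropped before further processing.
REQUIRED_FIELDS = {"age_band", "region", "usage_metrics"}


def pseudonymise_id(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonymisation).

    Without the key the token cannot be traced back to the person, but the
    same individual always maps to the same token, so records stay linkable
    for analysis and model training.
    """
    return hmac.new(PSEUDONYMISATION_KEY, user_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()


def minimise_record(record: dict) -> dict:
    """Keep only the necessary fields, replacing the identifier with a token."""
    minimised = {key: value for key, value in record.items()
                 if key in REQUIRED_FIELDS}
    minimised["subject_token"] = pseudonymise_id(record["user_id"])
    return minimised


if __name__ == "__main__":
    raw = {
        "user_id": "alice@example.com",      # direct identifier, not stored downstream
        "age_band": "30-39",
        "region": "EU-West",
        "usage_metrics": [0.4, 0.7],
        "full_address": "1 Example Street",  # not needed for the purpose, so dropped
    }
    print(minimise_record(raw))
```

Using a keyed hash (HMAC) rather than a plain hash means the mapping cannot be recomputed by anyone who does not hold the key, while records belonging to the same person remain linkable for legitimate processing.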

While some technical limitations remain in achieving full explainability across all tools, HYPER-AI prioritises transparency in the development process, ensuring openness about how systems function and what data they rely on.

Beyond compliance, HYPER-AI promotes fairness, non-discrimination, and inclusivity. As the AI landscape evolves, the project’s commitment to transparency, human-centred design, and proactive governance is helping shape a future where technology serves society, not the other way around.


For further updates, visit our website and follow us on LinkedIn, X and YouTube to stay connected with the latest developments in HYPER-AI.