Accountability by Design: Turning AI Standards into Practice and Certification
Exploring how harmonized standards turn the EU AI Act into real-world AI testing and certification. Discover how the EU AI Act moves from regulatory text to technical implementation, and what this means for organizations developing and deploying AI systems.
This joint session brings together two complementary perspectives on AI accountability: applied AI assessment and certification from Fraunhofer IAIS, and European standardization from CEN-CENELEC JTC 21. Dr. Maximilian Poretschkin and Dr. Sebastian Hallensleben will explore how the regulatory framework of the EU AI Act translates into technical practice, from the testing and certification of AI systems to the development of harmonized standards that guide implementation across Europe.
The webinar highlights the interaction between those who develop the rules and those who must apply them in practice, with particular attention to the implications for SMEs and research institutions working with limited resources.
Date and Time: Wednesday, March 18, 2026 | 16:30 – 17:30 CET
Online | Free
Registration: https://forms.office.com/e/j0CUx4dTsM
This webinar is organized by the Slovak National Supercomputing Centre (NCC Slovakia) as part of the EuroCC project (National Competence Centre – NCC Slovakia).
This session continues the AI Accountability Dialogue Series – “Who Is Responsible for AI in Europe?”, a series exploring how responsibility for artificial intelligence is defined and implemented across ethical, legal, and technical domains.
The webinar will be held in English.
The EU AI Act establishes a comprehensive regulatory framework for artificial intelligence in Europe, but how do its requirements translate into technical reality?
This joint session presents two complementary perspectives on AI accountability. Dr. Maximilian Poretschkin (Fraunhofer IAIS) draws on his experience leading the ZERTIFIZIERTE KI project to discuss the current state of AI assessments — how organizations and AI systems can be evaluated against legal and technical requirements, and what practical certification processes look like today.
Dr. Sebastian Hallensleben (CEN-CENELEC JTC 21) provides the perspective of European standardization, explaining how harmonized standards supporting the AI Act are being developed and how they guide the technical implementation of regulatory requirements.
Together, the speakers explore the interaction between rule-making and practical implementation, and what this means for organizations navigating AI compliance, technical verification, and trust in AI systems.
Dr. Maximilian Poretschkin is Head of the Department AI Assurance and Assessment (AAA) at Fraunhofer IAIS in Sankt Augustin, Germany. His work focuses on the testing, evaluation, and certification of trustworthy AI systems. He leads the “ZERTIFIZIERTE KI” project, which develops testing methodologies, assessment tools, and certification approaches for artificial intelligence and transfers its results into AI standardization. With a background in physics and experience in both research and industry consulting, he currently works on practical methods for evaluating AI systems, forensic analysis of AI behaviour, and compliance strategies for advanced AI systems such as large language models.
Dr. Sebastian Hallensleben is Chair of the CEN-CENELEC JTC 21 technical committee on Artificial Intelligence and Chief Trust Officer at Resaro Europe. His work focuses on digital trust, AI governance, and the development of harmonized standards supporting the implementation of the EU AI Act across Europe. In addition to his European standardization work, he contributes to international initiatives on AI risk and accountability, including co-chairing the OECD working group on AI risk and accountability. Across these roles, he concentrates on translating ethical and regulatory principles into measurable technical standards for trustworthy AI systems.
Format
- Opening presentation by Dr. Maximilian Poretschkin (15–20 minutes): AI assessment, testing, and certification in practice
- Opening presentation by Dr. Sebastian Hallensleben (15–20 minutes): the European standardization framework underpinning the AI Act
- Moderated discussion and audience Q&A (approximately 20–30 minutes)
