Online lecture: AI Responsibility Gaps
AI Accountability Dialogue Series
On 12 February 2026, we are organising the opening online lecture of the AI Accountability Dialogue Series, focusing on the timely topic of “responsibility gaps” in artificial intelligence systems. Our guest speakers will be Daniela Vacek and Jaroslav Kopčan.
Daniela Vacek is a Slovak philosopher specializing in the ethics of artificial intelligence, responsibility, analytic aesthetics, and philosophical logic. She works at the Institute of Philosophy of the Slovak Academy of Sciences (SAS), the Kempelen Institute of Intelligent Technologies (KInIT), and the Faculty of Arts of Comenius University in Bratislava. She is a laureate of the ESET Science Award 2025 in the category Outstanding Young Scientist in Slovakia under 35, and she leads an APVV-funded project entitled Philosophical and Methodological Challenges of Intelligent Technologies (TECHNE).
Jaroslav Kopčan works as a research engineer at the Kempelen Institute of Intelligent Technologies (KInIT), where he specializes in natural language processing (NLP) and explainable artificial intelligence (XAI). His research focuses on automated content analysis and explainability techniques for underrepresented languages. He works on the development of interpretable NLP systems and tools, with an emphasis on knowledge distillation.
Date and Time: Thursday, 12 February 2026 | 10:00 CET
Venue: Online | Free participation
The lecture will be conducted in English.
There is an extensive debate on responsibility gaps in artificial intelligence. These gaps correspond to situations of normative misalignment: someone ought to be responsible for what has occurred, yet no one actually is. They are traditionally traced to two sources: a lack of adequate knowledge of how an artificial intelligence system arrived at its output, and a lack of control over that output. Although many individuals involved in the development, production, deployment, and use of an AI system possess some degree of knowledge and control, none of them has the level of knowledge and control required to bear responsibility for the system’s good or bad outputs. To what extent is this lack of knowledge and control at the level of outputs present in contemporary AI systems?
From a technical perspective, relevant knowledge and control are often limited to the general properties of artificial intelligence systems rather than to specific outputs. Actors typically understand the system’s design, training processes, and overall patterns of behaviour, and they can influence system behaviour through design choices, training methods, and deployment constraints. However, they often lack insight into how a particular output is produced in a specific case and do not have reliable means of intervention at that level.
The lecture will offer several insights into these questions. In addition, we will show that the picture is even more complex. There are different forms of responsibility, each associated with distinct conditions that must be met. Accordingly, some forms of responsibility remain unproblematic even in the case of AI system outputs, while others prove to be more challenging.