Online lecture: AI Responsibility Gaps
AI Accountability Dialogue Series
On 12 February 2026, we are organising the opening online lecture of the AI Accountability Dialogue Series, focusing on the timely topic of “responsibility gaps” in artificial intelligence systems. Our guest speakers will be Daniela Vacek and Jaroslav Kopčan.
Daniela Vacek is a Slovak philosopher specializing in the ethics of artificial intelligence, responsibility, analytic aesthetics, and philosophical logic. She works at the Institute of Philosophy of the Slovak Academy of Sciences (SAS), the Kempelen Institute of Intelligent Technologies (KinIT), and the Faculty of Arts of Comenius University in Bratislava. She is a laureate of the ESET Science Award 2025 in the category Outstanding Young Scientist in Slovakia under 35, and she leads an APVV-funded project entitled Philosophical and Methodological Challenges of Intelligent Technologies (TECHNE).
Jaroslav Kopčan works as a research engineer at the Kempelen Institute of Intelligent Technologies (KinIT), where he specializes in natural language processing (NLP) and explainable artificial intelligence (XAI). His research focuses on automated content analysis and explainability techniques for underrepresented languages. He works on the development of interpretable NLP systems and tools, with an emphasis on knowledge distillation.
Date and Time: Thursday, 12 February 2026 | 10:00 CET
Venue: Online | Free participation
The lecture will be conducted in English.
There is an extensive debate on responsibility gaps in artificial intelligence. These gaps correspond to situations of normative misalignment: someone ought to be responsible for what has occurred, yet no one actually is. They are traditionally thought to be rooted in a lack of adequate knowledge of how an artificial intelligence system arrived at its output, as well as in a lack of control over that output. Although many individuals involved in the development, production, deployment, and use of an AI system possess some degree of knowledge and control, none of them has the level of knowledge and control required to bear responsibility for the system’s good or bad outputs. To what extent is this lack of knowledge and control at the level of outputs present in contemporary AI systems?
From a technical perspective, relevant knowledge and control are often limited to the general properties of artificial intelligence systems rather than to specific outputs. Actors typically understand the system’s design, training processes, and overall patterns of behaviour, and they can influence system behaviour through design choices, training methods, and deployment constraints. However, they often lack insight into how a particular output is produced in a specific case and do not have reliable means of intervention at that level.
The lecture will offer several insights into these questions. In addition, we will show that the picture is even more complex. There are different forms of responsibility, each associated with distinct conditions that must be met. Accordingly, some forms of responsibility remain unproblematic even in the case of AI system outputs, while others prove to be more challenging.