
Success story: When a production line knows what will happen in 10 minutes

Every disruption on a production line creates stress. Machines stop, people wait, production slows down, and decisions must be made under pressure. In the food industry—especially in the production of filled pasta products, where the process follows a strictly sequential set of technological steps—one unexpected issue at the end of the line can bring the entire production flow to a halt.

But what if the production line could warn in advance that a problem will occur in a few minutes? Or help decide, already during a shift, whether it still makes sense to plan packaging later the same day? These were exactly the questions that stood at the beginning of a research collaboration that brought together industrial data, artificial intelligence, and supercomputing power.

The research was carried out by an international team of experts in artificial intelligence and industrial analytics from both academia and the private sector. The project involved the company Prounion a.s. in cooperation with Constantine the Philosopher University in Nitra, as well as additional academic partners from the Czech Republic and Hungary.

Challenge

Modern production lines generate enormous volumes of data—from machine states and operating speeds to temperatures and production counts. Despite this, key operational decisions are still often made based on experience and intuition.

The researchers focused on a real production line for filled pasta products, where the product passes through a fixed sequence of machines—from raw material preparation, through forming and filling, to thermal processing and packaging. They identified two decisions with a critical impact on production efficiency:

  • Early warning: Is it possible to predict whether the packaging machine will stop within the next 10 minutes?
  • In-shift planning: Can it be reliably determined during the working day whether packaging will still take place later the same day?

Answering these questions required working with large volumes of time-series data while strictly respecting real production conditions—models were allowed to use only the information that is genuinely available at a given moment to an operator or shift supervisor.

Solution

The research team first unified data from all machines into a single time axis and processed it to accurately reflect the real operation of the production line. They then developed machine-learning models that worked exclusively with information available at the given moment—exactly as an operator or shift manager would have it in practice.
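The leakage-safe framing described above can be sketched in a few lines. This is an illustrative example only, not the team's published code: the per-minute machine-state series, the feature choices, and the function name `make_dataset` are all hypothetical stand-ins for the real production data.

```python
import numpy as np

def make_dataset(running, horizon=10, window=30):
    """Build a leakage-safe dataset from a per-minute machine-state series.

    running: 1D array of 0/1 flags (1 = packaging machine running), sampled
             once per minute and already aligned to a single time axis.
    Label y[t] = 1 if the machine stops at least once in (t, t+horizon].
    Features for t use only the trailing `window` minutes up to and
    including t, i.e. only information an operator would actually have.
    """
    running = np.asarray(running, dtype=float)
    X, y = [], []
    for t in range(window - 1, len(running) - horizon):
        past = running[t - window + 1 : t + 1]      # history only
        future = running[t + 1 : t + 1 + horizon]   # used for the label only
        X.append([past.mean(),                      # recent uptime ratio
                  past[-5:].mean(),                 # very recent uptime
                  float(past[-1])])                 # current state
        y.append(float((future == 0).any()))        # any stop ahead?
    return np.array(X), np.array(y)

# Toy series: the machine runs, then stops around minute 50.
state = np.ones(80)
state[50:55] = 0
X, y = make_dataset(state)
print(X.shape, int(y.sum()))
```

Any classifier trained on `X` and `y` then answers the operator's question directly: given only what is known now, is a stoppage likely within the next ten minutes?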

A key milestone of the project was access to high-performance computing resources. NSCC Slovakia facilitated the research team's access to the European EuroHPC supercomputing infrastructure, specifically to the Karolina supercomputer in the Czech Republic. This made it possible to rapidly experiment with different models, test them on real production days, and validate their behavior under conditions close to real industrial practice.

The supercomputer thus became not just a technical tool, but a key driver of innovation, enabling the transition from theoretical analytics to decisions that can be used in real operations.

Results

The model focused on early warning of packaging machine stoppages achieved very high accuracy. It was able to reliably identify situations in which a stoppage was likely within the next 10 minutes, while keeping the number of false alarms to a minimum. This means the alerts are trustworthy and do not overwhelm operators with unnecessary warnings.

The second model, designed for in-shift planning, was able with high reliability to determine whether packaging would still take place later the same day. Managers thus gained a practical basis for decisions related to staffing, work planning, and efficient use of time.

Both approaches share a common principle: they do not predict abstract numbers, but instead answer concrete questions that production teams face every day.

Impact and future potential

This success story shows that artificial intelligence in industry does not have to be a futuristic experiment. When analytics is focused on real operational decisions and supported by the right infrastructure, it can become a quiet and reliable assistant to production.

The solution is easily extendable to other production lines and sectors. Looking ahead, additional data—such as product types, planned maintenance, or shift schedules—can be integrated, allowing models to be even more precisely tailored to the specific needs of companies.

The key message is clear:
When data, artificial intelligence, and supercomputers are aligned with real industrial needs, the result is solutions with immediate practical value.



Who Owns AI Inside an Organisation? — Operational Responsibility

AI Accountability Dialogue Series


As artificial intelligence becomes embedded in everyday organisational processes, a practical question is coming to the foreground under the EU AI Act: who actually owns AI inside an organisation? With increasing reliance on third-party providers, foundation models, and distributed internal roles, traditional notions of ownership and responsibility are no longer sufficient.

This webinar focuses on how organisations can define clear operational responsibility and ownership of AI systems in a proportionate and workable way. Drawing on hands-on experience in data protection, AI governance, and compliance, Petra Fernandes will explore governance approaches that work in practice for both SMEs and larger organisations. The session will highlight internal processes that help organisations stay in control of their AI systems over time, without creating unnecessary administrative burden.

Date and Time:
Tuesday, 3 March 2026 | 10:00 CET (9:00 PT)
Online | Free Registration

This webinar is organized by the Slovak National Supercomputing Centre as part of the EuroCC project (National Competence Centre – NCC Slovakia) in cooperation with NCC Portugal within the AI Accountability Dialogue Series.

The webinar will be held in English.

Abstract:

The EU AI Act introduces new roles and obligations that reshape how responsibility for AI systems is distributed inside organisations. In practice, however, AI ownership is often fragmented across legal, technical, compliance, data, and business functions, and further complicated by dependence on third-party and foundation models.

This webinar examines how organisations can address these challenges by distinguishing operational responsibility from operational ownership, and by clarifying decision rights and accountability across the AI system lifecycle. It discusses practical governance mechanisms aligned with organisational size and risk, including internal monitoring, documentation, and traceability of AI systems. Particular attention is given to common deployment challenges such as unclear ownership boundaries, reliance on external providers, and the emergence of informal or “shadow” AI use.

Speaker

Petra Fernandes

Lawyer – Data Protection, Artificial Intelligence & Cybersecurity

Petra Fernandes completed her Law Degree in 2003 and has since been advising clients on legal and governance matters related to data protection, artificial intelligence, and cybersecurity. She has served as a Data Protection Officer and as part of DPO teams for both private companies and public administrations.

In addition to advisory work, she regularly delivers training and awareness-raising sessions on data protection and AI governance for public and private sector organisations, with a strong focus on practical implementation and compliance.

Topics Include:

  • AI ownership versus operational responsibility under the EU AI Act
  • Roles and responsibilities of providers, deployers, and internal teams
  • Proportional AI governance models for SMEs and large organisations
  • Internal monitoring, documentation, and traceability of AI systems
  • Managing ownership when using third-party and foundation models
  • Addressing challenges such as shadow AI and informal AI use

Outline:

  1. Introduction: Why AI ownership is more than a legal issue
  2. The different players under the AI Act and their role in AI ownership
  3. Provider and Deployer roles and internal organisational responsibility
  4. Senior management accountability and decision-making authority
  5. Proportional AI governance models
    • Internal monitoring and documentation
    • Mapping AI systems and use cases
    • Embedding responsibility into procurement and development
  6. Challenges in real AI deployments
    • Fragmented ownership and unclear decision rights
    • Dependence on third-party and foundation models
    • Shadow AI and evolving systems
  7. Key priorities for establishing clear AI ownership
  8. Discussion and Q&A


Online lecture: AI Responsibility Gaps

AI Accountability Dialogue Series

On 12 February 2026, we are organising the opening online lecture of the AI Accountability Dialogue Series, focusing on the timely topic of “responsibility gaps” in artificial intelligence systems. Our guest speakers will be Daniela Vacek and Jaroslav Kopčan.

Daniela Vacek works at the Institute of Philosophy of the Slovak Academy of Sciences (SAS), the Kempelen Institute of Intelligent Technologies (KinIT), and the Faculty of Arts of Comenius University in Bratislava. She is a laureate of the ESET Science Award 2025 in the category Outstanding Young Scientist in Slovakia under 35. She is a Slovak philosopher specializing in the ethics of artificial intelligence, responsibility, analytic aesthetics, and philosophical logic. She leads an APVV-funded project entitled Philosophical and Methodological Challenges of Intelligent Technologies (TECHNE).

Jaroslav Kopčan works as a research engineer at the Kempelen Institute of Intelligent Technologies (KinIT), where he specializes in natural language processing (NLP) and explainable artificial intelligence (XAI). His research focuses on automated content analysis and explainability techniques for underrepresented languages. He works on the development of interpretable NLP systems and tools, with an emphasis on knowledge distillation.

Date and Time: Thursday, 12 February 2026 | 10:00 CET

Venue: Online | Free participation 

The lecture will be conducted in English.

Registration

There is an extensive debate on responsibility gaps in artificial intelligence. These gaps correspond to situations of normative misalignment: someone ought to be responsible for what has occurred, yet no one actually is. They are traditionally considered to be rooted in a lack of adequate knowledge of how an artificial intelligence system arrived at its output, as well as in a lack of control over that output. Although many individuals involved in the development, production, deployment, and use of an AI system possess some degree of knowledge and control, none of them has the level of knowledge and control required to bear responsibility for the system’s good or bad outputs. To what extent is this lack of knowledge and control at the level of outputs present in contemporary AI systems?

From a technical perspective, relevant knowledge and control are often limited to the general properties of artificial intelligence systems rather than to specific outputs. Actors typically understand the system’s design, training processes, and overall patterns of behaviour, and they can influence system behaviour through design choices, training methods, and deployment constraints. However, they often lack insight into how a particular output is produced in a specific case and do not have reliable means of intervention at that level.

The lecture will offer several insights into these questions. In addition, we will show that the picture is even more complex. There are different forms of responsibility, each associated with distinct conditions that must be met. Accordingly, some forms of responsibility remain unproblematic even in the case of AI system outputs, while others prove to be more challenging.


Current FFplus Call

European startups and small and medium-sized enterprises working with artificial intelligence, data, or computationally intensive models currently have the opportunity to take part in an attractive call aimed at supporting innovative research and development. The call targets companies that want to further advance their technological solutions, validate them in real-world conditions, and harness the potential of cutting-edge European supercomputing infrastructure.

It is an opportunity that combines funding, access to state-of-the-art computing resources, and international collaboration, while allowing companies to focus exclusively on research and innovation without the administrative and financial burden typical of many other grant schemes. The call supports applied research with clear innovation potential and is particularly suitable for enterprises that already have experience in using HPC infrastructure and wish to further build on this expertise.

Grants for startups and SMEs of up to €300,000 for innovations in AI and HPC

Startups and small and medium-sized enterprises can receive a grant of up to €300,000 to implement innovative projects using supercomputers and high-performance computing (HPC).

In brief:

  • funding for innovative research with no co-financing requirement and no equity taken,
  • free access to European high-performance computing resources,
  • the possibility to form a consortium with another company or a research institution.

The supported innovation project must last no longer than 10 months and may start no earlier than 1 September 2026. The call targets companies that already have practical experience in using HPC infrastructure.

The call is already open. Applications can be submitted from 3 to 25 February 2026 on a first-come, first-served basis. The maximum number of submitted projects is limited to 250.

More information


Success story: AI Helps Save Women’s Lives

Fear of breast cancer is a silent companion for many women. All it takes is an invitation to a preventive screening, a single phone call from a doctor, or the wait for test results—and the mind fills with questions: “Am I okay?” “What if I’m not?” “Could something be missed?”
Even when screening confirms a negative result, the worries often persist.

That is precisely why it makes sense to seek new ways to detect cancer as early as possible—not to replace doctors, but to help them see more, faster, and with greater confidence. And this is where artificial intelligence enters the story. Not as a sci-fi technology, but as a tool that may one day help protect lives.

A Slovak research team from the University of Žilina has brought together medicine, artificial intelligence, and European supercomputers in a joint project with a clear goal: to improve the accuracy of breast cancer detection and support doctors in the interpretation of mammographic images.

Challenge

Mammography generates enormous volumes of imaging data. A single project may work with hundreds of thousands of images at extremely high resolution. The Slovak team from the University of Žilina worked with more than 434,000 mammograms, representing data on the scale of several terabytes.

At the same time, the team decided to use a foundation model—a massive neural network with nearly a billion parameters, originally developed for general image analysis. Such a model has enormous potential, but it also places extreme demands on computing power, memory, and data processing speed.

It quickly became clear that standard research infrastructure was simply not sufficient for such a volume of computations. Without a supercomputer, the project could not have continued.

Solution

The breakthrough came when the project gained access to the AI Factory VEGA in Slovenia, which is part of the European EuroHPC initiative. For the first time, Slovak medical AI research was able to work on infrastructure with a level of performance it had never had access to before.

The platform offered state-of-the-art NVIDIA H100 GPU accelerators, designed specifically for artificial intelligence workloads. On it, the researchers built a complete technological pipeline, from processing mammographic images to training the model itself.

First, the data had to be cleaned, optimized, and prepared so it could be loaded efficiently during computation. Then the process of adapting the large AI model began, as it “learned” to understand the subtle details of mammography. This was not a one-off computation—it was an incremental process in which the model improved step by step.
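The incremental adaptation described above can be illustrated with a minimal sketch. This is not the team's actual pipeline: it shows the common pattern of adapting a frozen foundation model by fitting a lightweight task-specific head on its embeddings, using a simple logistic-regression head trained step by step. The embeddings here are synthetic stand-ins, and all names (`train_head`, cluster parameters) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_head(embeddings, labels, lr=0.1, epochs=200):
    """Fit a logistic-regression head on frozen foundation-model embeddings.

    In a real pipeline the embeddings would come from the (frozen) vision
    backbone applied to mammograms; here random vectors stand in for them.
    Training proceeds incrementally, one gradient step at a time, mirroring
    the step-by-step improvement described above.
    """
    n, d = embeddings.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        z = embeddings @ w + b
        p = 1.0 / (1.0 + np.exp(-z))              # predicted probability
        w -= lr * (embeddings.T @ (p - labels) / n)
        b -= lr * float(np.mean(p - labels))
    return w, b

# Synthetic stand-in: two separable clusters of "image embeddings",
# e.g. healthy tissue vs. suspicious findings.
X = np.vstack([rng.normal(-1, 1, (100, 16)), rng.normal(+1, 1, (100, 16))])
y = np.concatenate([np.zeros(100), np.ones(100)])
w, b = train_head(X, y)
acc = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

The design choice this mirrors is practical: keeping a near-billion-parameter backbone frozen and training only a small head drastically reduces the compute and memory needed per step, which is what makes incremental experimentation on terabyte-scale image collections feasible at all.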

The supercomputer thus became not only a powerful tool but a key partner in research. It made it possible to do what was previously virtually impossible: to train a massive medical AI model at once using an enormous volume of data.

Results

Researchers have shown that artificial intelligence can learn from mammographic images in a way that gradually enables it to distinguish between healthy tissue and changes that may signal a problem. In other words, the system began to learn how to “look” at images in a manner similar to a physician—searching for subtle details and small deviations that can be very difficult for the human eye to notice.

This progress is particularly important because it represents the first step toward enabling artificial intelligence to flag changes that a human might not notice at first glance. It is not about replacing the physician, but about providing a supporting tool that can help clinicians make decisions with greater confidence, especially in borderline and ambiguous cases.

Impact and future potential

If this research continues to be further developed, artificial intelligence could become a silent assistant in preventive screening. It can speed up the evaluation of imaging data, reduce the risk of overlooking subtle changes, and help detect disease at a stage when it is still highly treatable.

For women, this means in practice a greater chance of early cancer detection and thus greater hope of a full recovery. In the case of negative findings, women can receive an independent and objective supplementary opinion, reducing post-screening uncertainty. Although further work still lies ahead of the researchers, it is already clear that the direction this research is taking makes great sense. The goal is simple but powerful: to use modern technologies to help protect women's health and lives.



VICE and the Digital Twin at the Pre-Christmas Hydrogen Infoday

On 10 December 2025, we took part in the Pre-Christmas Hydrogen Infoday in Bratislava. The programme included a presentation of VICE – Vertical Integrated Cyclic Energy, Hydrogen, led by Laurie Farmer.

The presentation was prepared and delivered jointly, with each member of the team presenting their respective expert section:

  • Laurie Farmer explained the technological concept of storing surplus energy in hydrogen and its significance for energy systems.
  • Lucia Malíčková explained how Farmer's concept was integrated into a digital twin that enables the simulation of different scenarios, operational analysis, and the optimization of energy cycles.

The presentation demonstrated:

  • the potential of hydrogen as a stabilizing element within the energy infrastructure,
  • the possibilities for efficient storage and utilisation of energy surpluses,
  • the advantages of linking energy concepts with HPC simulations through the digital twin,
  • the practical application of modelling for future energy and industrial solutions.

The event brought a wealth of insightful discussions and confirmed the growing importance of connecting hydrogen technologies with digital solutions. We look forward to further collaboration with partners from the energy, research, and industrial sectors, as well as to new opportunities that will allow us to continue advancing innovative approaches to sustainable energy.


Strengthening Slovak–Romanian Cooperation and the Development of Scientific Partnership

Romania’s National Day is a significant historical milestone commemorating the Great Union of 1918, when Transylvania, Bessarabia, and Bukovina united with the Kingdom of Romania. This moment laid the foundations of the modern Romanian state and remains a powerful symbol of national identity and unity to this day.

On this occasion, a ceremonial reception was held, bringing together representatives of diplomacy, public institutions, and the scientific community. The event was also attended by Lucia Malíčková, Representative of the National Supercomputing Centre. During the reception, she met with the President of the Slovak Academy of Sciences, Mgr. Martin Venhart, DrSc., as well as with His Excellency, the Ambassador of Romania, Calin Fabian.

The meeting with Martin Venhart marked an important step toward strengthening cooperation between the National Supercomputing Centre and the Slovak Academy of Sciences. Both institutions expressed interest in closer collaboration, particularly in the areas of promoting science and technology, supporting research, and preparing joint innovative projects. This cooperation holds significant potential to contribute to the development of the Slovak research and innovation ecosystem and to the integration of high-performance computing, artificial intelligence, and academic research.

The event also reaffirmed the importance of international cooperation and the shared ambition to further deepen Slovak–Romanian relations, particularly in the strategically important fields of high-performance computing and artificial intelligence.

The reception thus became not only a celebration of Romania’s national heritage, but also an important platform for professional dialogue, networking, and the creation of future partnerships with the potential to deliver concrete benefits for science, innovation, and technological development in both countries.

Kategórie
General

We Participated in the High-Level Summit on AI in Bratislava

We participated in the prestigious High-Level Summit on AI – BratislavAI Forum 2025, one of the most significant current European contributions to shaping the global architecture of AI governance. Lucia Malíčková represented our expertise in artificial intelligence, data ecosystems, and digital innovation.

The BratislavAI Forum took place during a period of profound global transformation. The rapid development of artificial intelligence and digital technologies is reshaping economies, industries, education, science, and everyday life. Yet despite the immense opportunities, one essential question stands before us: Are we, as a society, truly prepared for these changes?

The event builds on the AI Action Summit in Paris (February 2025) and paves the way for the High-Level Global AI Conference in India (2026). In doing so, the Slovak Republic positions itself among the active partners contributing to the development of a unified, responsible, and secure global AI ecosystem.

The summit emphasized the need to connect technological progress with the development of human capabilities, critical thinking, and inclusive approaches. The discussions brought together insights from senior government officials, international organizations, technology leaders, and digital regulation experts.

One of the key themes was the fact that the world still lacks a unified framework for the regulation and governance of artificial intelligence. This gap results in fragmented policies, inconsistent development standards, and increased risks of misuse.

The discussions focused on the implementation of:

  • the UN Global Digital Compact,
  • the OECD AI Principles,
  • the UNESCO Recommendation on the Ethics of Artificial Intelligence,
  • as well as other multilateral initiatives aimed at strengthening digital trust, security, and accountability.

It was highlighted that only a coordinated international approach can fully unlock the potential of AI in a way that benefits everyone—regardless of geography or socio-economic conditions.

The second main thematic area focused on the economic impacts of artificial intelligence. Experts from central banks, the technology sector, and the legal field discussed how AI is transforming:

  • productivity,
  • the labor market,
  • investment flows,
  • regulation and public policy,
  • and the trust between citizens, the state, and technologies.

The aim was to identify the right balance between innovation and the protection of public interest, while creating an environment that supports inclusive economic growth.

The participation of Lucia Malíčková strengthened our position in discussions on digital transformation, artificial intelligence, and the safe use of data. Her contributions reaffirmed our commitment to actively shaping modern AI ecosystems and supporting international cooperation in this field.

The High-Level Summit on AI in Bratislava enabled us to build new partnerships, share our expertise, and contribute to the discussion on what responsible, secure, and inclusive AI development should look like in Europe and globally.

Kategórie
General

NCC Slovakia on Business Mission in Portugal During the State Visit of the Slovak President

On 26–27 November 2025, the National Supercomputing Centre (NSCC Slovakia) took part in a business mission to Portugal held on the occasion of the state visit of the President of the Slovak Republic, Peter Pellegrini. Božidara Pellegrini took part in the mission on behalf of NSCC and also served as the representative of the National Competence Centre for HPC (NCC Slovakia) – the NSCC division responsible for the EuroCC project and for facilitating access of Slovak stakeholders to EuroHPC JU supercomputing resources. The delegation consisted of nearly 20 innovative Slovak companies and institutions active in the fields of digital solutions, information technologies, smart cities, and energy.

In their opening remarks, the Presidents of Portugal and Slovakia highlighted the shared commitment of both countries to innovation, education, and human capital – values that form a natural foundation for strengthening technological cooperation.

Exploring Portugal’s Innovation Ecosystem

The first day of the mission was dedicated to visiting three key innovation hubs in Lisbon. The delegation began at AI Hub by Unicorn Factory Lisboa, a centre supporting startups developing solutions in the field of artificial intelligence. This was followed by a tour of Unicorn Factory Lisboa – Beato Innovation District, one of Europe’s largest technology campuses. In the afternoon, the delegation visited Taguspark, home to more than 160 technology and research companies. These visits provided Slovak participants with valuable opportunities for networking and deepening the technological dialogue.

Slovak–Portuguese Business Forum

The second day centred on the Slovak–Portuguese Business Forum “Green & Smart Futures”, held at Unicorn Factory Lisboa. The forum was opened by the Presidents of both countries, who emphasised the importance of “building bridges of innovation between the Atlantic and the heart of Europe.” The programme included presentations on the investment environment, contributions from SARIO and AICEP, signing of cooperation memoranda, and, above all, intensive B2B meetings during which companies identified technological synergies and discussed potential future collaborations.

Linking the Mission to the Role of NCC Slovakia

The participation of NCC Slovakia confirmed the growing interest of Slovak companies in solutions based on artificial intelligence, simulations, and work with large datasets – areas that naturally require high-performance computing resources.

Discussions during the B2B meetings showed that an increasing number of companies encounter infrastructure limits when developing AI models or processing large-scale data. NCC Slovakia helps them identify where HPC can deliver the greatest added value and guides them in designing projects that can effectively leverage advanced computational resources.

Within the EuroCC project, NCC Slovakia also provides educational support through free courses and expert webinars, and enables Slovak companies, universities, and research organisations to gain access to European supercomputing capacities via EuroHPC JU access calls. In this way, NCC Slovakia remains a key partner for Slovak innovation and research, helping transform technological ambitions into concrete projects powered by modern HPC resources.

Kategórie
General

Invitation to the Online Lecture: Building Europe’s Sovereign AI – The Bielik.AI Case

We invite you to join the online lecture “Building Europe’s Sovereign AI – The Bielik.AI Case”, taking place on 11 December 2025 at 10:00. The event is open to experts, practitioners, and anyone interested in artificial intelligence, European technological sovereignty, and open-source AI initiatives.

Europe can build sovereign artificial intelligence without isolation by ensuring control, portability, and auditability across compute infrastructure, data, models, and deployment.

A compelling example is Bielik.AI, an open, community-driven family of AI models developed in Poland. Within just 18 months, nine models have been released, focusing on EU languages and embedding safety and transparency through features such as Bielik Guard. The lecture will illustrate practical steps toward a credible European path to AI sovereignty.

Katarzyna Z. Staroslawska
AI and HPC specialist engaged in European-scale initiatives in the fields of artificial intelligence, high-performance computing, and AI governance.

11 December 2025
10:00
Online (the link will be sent to registered participants)

Registration
