Registration Open: Code Tuning for the Exascale
On June 5-7, 2023, in cooperation with the TREX Center of Excellence and the National Competence Centers for HPC from Austria and the Czech Republic, we are organizing a hands-on workshop focused on the development of HPC applications. Participants will be able to try tools for performance analysis and bottleneck identification, such as MAQAO (www.maqao.org). The workshop will include practical exercises; participants will be able to work with their own software, analyze it, and consult with the expert lecturers.
The workshop specifically targets code developers and will focus on code optimisation. Participants are encouraged to bring their own codes to learn techniques, methods, and solutions for improving them, both in terms of performance and in terms of scalability across multiple platforms.
What are the main goals of this workshop?
- Increasing practical experience with performance, power consumption, and energy efficiency in HPC systems;
- Helping participants optimise their codes so that the implementation matches their HPC objectives;
- Providing hands-on experience with practical QMC simulations based on the CHAMP code.
What can you expect from the workshop?
- During the first day, participants will work together with experts from the Austrian Competence Center for HPC on advanced parallel programming (MPI+X).
- The second day of the program will be devoted to the analysis of the performance of parallel applications, the measurement of energy consumption and the evaluation of energy efficiency on HPC systems under the guidance of experts from the Czech Competence Center for HPC.
- On the last day, participants will explore and analyze the main bottlenecks in node-level application optimization: vectorization, code quality, locality and parallelism. Participants will work under the guidance of experts from TREX CoE.
All topics will be covered in lectures as well as in practical hands-on sessions. The workshop will take place face-to-face at the SAV campus on Patrónka in Bratislava. The full programme of the workshop will be available in the coming weeks.
The workshop targets developers and advanced HPC users with experience in parallel programming and in C, C++, and/or Fortran.
Attendees are kindly requested to bring their own laptop.