We invite you to the 46th VI-HPS Tuning Workshop: POP3 Profiling and Optimization Tools. The event is organized by the POP3 CoE in cooperation with the National Competence Centres for HPC from Slovakia, Czechia, Poland, Austria, Hungary, and Slovenia.
Virtual Institute—High Productivity Supercomputing (VI-HPS) is an initiative that aims to enhance the productivity of supercomputing applications by providing a comprehensive set of tools and methodologies for performance analysis, debugging, and tuning. It brings together expertise and resources from various organisations to support developing and optimising high-performance computing applications.
The workshop is designed to facilitate collaborative learning and application tuning, with a particular emphasis on teams of two or more participants working on the same or closely related application codes that they are developing.
The first day of the workshop introduces participants to the POP Centre of Excellence (CoE), detailing its services, methodology, and tools for performance assessments and second-level services.
On the second day, the focus shifts to getting started with open-source multi-platform tools for analysing MPI+OpenMP application executions on CPU architectures.
The third day delves into more advanced usage, including analysing application executions on combined CPU and GPU architectures. During this hands-on workshop, participants will be introduced to the use of Paraver/Extrae and Scalasca/Score-P/CUBE toolsets for CPUs and GPUs.
Paraver/Extrae is a performance analysis toolset designed for tracing and analysing the execution of parallel applications. Extrae captures detailed execution traces, while Paraver provides powerful visualisation and analysis capabilities to help identify performance bottlenecks and optimise parallel code.
Scalasca/Score-P/CUBE is an integrated performance analysis toolkit for parallel applications. Score-P collects performance data in profiles and execution traces, Scalasca analyses and identifies performance issues, and CUBE facilitates exploration of the results, helping developers tune their applications.
Annotation
The course is organised in collaboration with POP3 CoE, NCC Austria, NCC Czechia, NCC Hungary, NCC Poland, NCC Slovakia and NCC Slovenia.
Additionally, other tools from the POP CoE will be available for participants to utilise throughout the workshop.
Target Audience and Purpose of the Course: Attendees will learn how to use the parallel performance analysis tools of the Performance Optimisation and Productivity (POP) CoE and a corresponding methodology for applying those tools to assess execution performance and scaling efficiency of their own parallel application codes in a portable fashion.
Level: Intermediate/advanced. No knowledge of any parallel performance tools is required (though serial-code profiling experience is advantageous); however, participants are expected to be familiar with building and running (potentially hybrid, GPU-enabled) parallel applications.
Course format: The hands-on parts will be available only to on-site participants, who should bring their own codes to work on.
The lecture parts will be available to an unlimited number of participants, who can attend online.
Prerequisites: Participants should be familiar with one or more parallel programming paradigms, such as MPI and OpenMP (on CPUs), and preferably also with the use of OpenMP, OpenACC, CUDA, or similar (for GPUs). When registering for the workshop, participants should report the programming languages and paradigms employed by their application codes, along with relevant framework/library dependencies. Note that applications using AI/ML frameworks such as TensorFlow are unsuitable for this workshop.
Technical requirements: Participants with their own application code(s) should have these installed and running on the Karolina supercomputer before the event. A representative execution test case should also be prepared, suitable for running on a single node in several minutes. The required tools will be available on Karolina (CPU and GPU partitions); however, participants may also install graphical tools on their own notebook computers. Each participant will get access to the cluster before the event.
Starts: 4 September 2024, 9:00 CET. Ends: 6 September 2024, 17:00 CET. Venue: online and face-to-face at IT4Innovations in Ostrava.
Call for Ideas: Seeking Slovak SME Partners for FFPlus Project Consortium
NCC Slovakia is looking for Slovak SME partners to form a consortium for a prestigious FFPlus project proposal. The aim is to leverage High-Performance Computing (HPC) to address specific business challenges, comprising e.g. modelling and simulation, data analytics, AI, etc., and to achieve significant industrial impact.
The selected SMEs can benefit from our own or other state-of-the-art European Tier-0 HPC infrastructure, code efficiency optimization and/or parallelization, and domain-specific technical support. The expected output is a success story in the form of a white paper, with no obligation to reveal details of the technical solution or any other proprietary/IP information or data.
What We Offer:
HPC Infrastructure: Access to state-of-the-art HPC systems.
Technical Support and co-development: Expert guidance in HPC utilization, workflow and code optimization and parallelization.
Application Guidance: NCC Slovakia will guide and accompany partners throughout the application process.
Expected Output:
White Paper: A short success story documenting the business impact achieved through HPC adoption. Note: There is no open science condition for this output.
Key Focus Areas:
Uptake of HPC by SMEs: Targeting businesses with no prior experience in HPC to solve real-world challenges.
Positive Business Impact: Demonstrate how HPC adoption leads to tangible business benefits.
Diverse Application Domains: Prioritizing projects with the highest business impact potential.
Eligibility:
Slovak SMEs: Must have less than 250 employees and an annual turnover of less than 50 million EUR.
Non-Research-Oriented: SMEs should be commercially driven; a focus on academic/fundamental research is not supported.
Project Details:
Submission Deadline: September 4th, 2024, 17:00 Brussels local time
Project Duration: Maximum of 15 months, starting January 1st, 2025
Funding Budget: Total of €4M for all sub-projects
Maximum Funding per Experiment: Up to 200 K€ per experiment, and up to 150 K€ per organization in the consortium. The main participant, i.e. the SME, can participate in only one experiment.
Total maximum number of consortium partners: 5 (the main participant plus supporting participants)
Proposal Expectations:
Alignment: Clearly define the business challenge and the necessity of HPC use.
Impact: Present the potential positive business impact.
Objectives: Set specific, achievable goals.
Consortium: Include all necessary parties for effective project execution.
Resources and Costs: Outline required resources and associated costs.
Data Protection: Address any data protection concerns.
Success Stories: Support in generating publishable success stories.
Submission Guidelines:
Format: Proposals must be submitted in English and comprise two parts: Part A (administrative information) and Part B (proposal body).
Electronic Submission: Proposals must be submitted electronically using the designated submission tool.
Join us in demonstrating the transformative potential of HPC for SMEs. Contact NCC Slovakia today so that we can build a partnership and apply for this project together.
The FFplus project has launched a new open call for European small and medium-sized enterprises. It is looking for agile, innovative companies that decide to use supercomputers in practice and thus gain a competitive advantage on the market.
The FFplus project is the fourth continuation of a very successful initiative that directly addresses how to help businesses overcome obstacles to the practical use of supercomputers and high-performance data analytics, and to the development of generative AI. The primary goal is to strengthen the global competitiveness of European industry.
In past years, dozens of companies from all over Europe have successfully completed these open calls and put supercomputers to use. Let their stories inspire you; you can find them on the FFplus website.
The FFplus project call is divided into 2 parts:
BUSINESS EXPERIMENTS
The first part of this call is intended for businesses with no previous experience with supercomputing, across all disciplines. As part of this call, companies have the opportunity to submit their "experiments", i.e. projects solving a specific business challenge with the help of supercomputing technologies, high-performance data analysis, or artificial intelligence. The estimated duration of an experiment is at most 15 months, with a planned start on January 1, 2025.
A sum of EUR 4 million will be distributed among all the selected projects for the financing of experiments.
The deadline for submitting applications is September 4, 2024 at 5 p.m.
INNOVATION STUDIES
The second part of the FFplus challenge will support companies and startups that are already active in the field of generative AI and that lack the necessary computing resources to develop their own models. The goal is to facilitate and strengthen the technological development of European companies in the field of AI.
Participating enterprises will be supported in increasing their innovation potential by leveraging new generative AI models, such as large language models (LLMs), based on their existing expertise, application area, business model, and potential for expansion.
Submitted "innovation studies" must use extensive European supercomputing resources (pre-exascale and exascale) to develop and adapt generative AI models (pr. LLM).
A sum of EUR 4 million intended for the financing of experiments will be distributed among all selected sub-projects.
The deadline for submitting applications is September 4, 2024 at 5 p.m.
Are you interested in this opportunity? You can find out more information on the project website.
The experts from the National Competence Centre for HPC will be happy to help you with the submission of your project - contact us.
Semi-Supervised Learning in Aerial Imagery: Implementing Uni-Match with Frame Field learning for Building Extraction
Building extraction in GIS (geographic information system) is pivotal for urban planning, environmental studies, and infrastructure management, allowing for accurate mapping of structures, including the detection of illegal constructions for regulatory compliance. Integrating extracted building data with other geospatial layers enhances the understanding of urban dynamics and spatial relationships. Given the scale and complexity of these tasks, there is a growing need to automate building extraction using deep learning techniques, which offer improved accuracy and efficiency in handling large-scale geospatial data.
State-of-the-art image segmentation models primarily output in raster format, whereas GIS applications often require vector polygons. One method to meet this requirement is Frame Field learning, which bridges the gap between the raster outputs of image segmentation models and the vector format needed in GIS. This approach significantly enhances the accuracy of building vectorization by aligning with ground truth contours and providing topologically clean vector objects.
These models are trained using a 'supervised learning' method, necessitating a large amount of labeled examples for training. However, obtaining such a significant volume of data can be extremely challenging and expensive. A potential solution to this problem is 'semi-supervised learning,' a method that reduces reliance on labeled data. In semi-supervised learning, the model is trained with a mix of a small set of labeled data and a larger set of unlabeled data. Hence, the goal of this collaboration between the Slovak National Competence Center for High-Performance Computing and Geodeticca Vision s.r.o. was to identify, implement, and evaluate an appropriate semi-supervised method for Frame Field learning.
Methods
Frame Field learning
The key idea of frame field learning [1] is to help the polygonization method resolve ambiguous cases caused by discrete probability maps (the output of image segmentation models). This is accomplished by adding an extra output to the segmentation neural network, namely a frame field (see Fig. 1), which represents the structural features and geometric characteristics of a building.
Frame fields
A frame field is a 4-PolyVector field that assigns four vectors to each point on a plane. Specifically, the first two vectors are constrained to be opposite to the other two, meaning each point is assigned a set of vectors {u, −u, v, −v}. This is particularly apt for buildings, which are regular structures with sharp corners: capturing directionality at a sharp corner requires two independent directions.
Figure 1: Visualization of the frame field output on the image from training set [1].
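In [1], the frame field at each pixel is encoded compactly by the coefficients of a complex polynomial whose roots are the four directions. A sketch of this parametrization, under our reading of [1], with u and v treated as complex numbers:

f(z) = (z² − u²)(z² − v²) = z⁴ + c₂z² + c₀, where c₂ = −(u² + v²) and c₀ = u²v²

The network therefore only needs to predict the two complex coefficient maps c₀ and c₂; the directions {u, −u, v, −v} are recovered as the roots of f.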
Frame Field learning
Figure 2: Diagram of the frame field learning [1]
The learning process of frame fields can be summarized as follows:
The network's input is a 3×H×W RGB image.
To generate a feature map, any deep segmentation model can be used, such as U-Net; the feature map is then processed to output detailed segmentation maps.
The training is supervised with ground truth rasterized polygons for interiors and edges, utilizing a mix of cross-entropy and Dice loss for accurate segmentation.
To train the frame field, three losses are used:
L_align enforces alignment of the frame field with the tangent direction.
L_align90 prevents the frame field from collapsing to a line field.
L_smooth measures the smoothness of the frame field.
Additional regularization losses are introduced to maintain output consistency, aligning the spatial gradients of the predicted maps with the frame field.
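A sketch of the three frame-field losses, under our reading of [1], with c₀(p), c₂(p) the predicted coefficient maps at pixel p, τ(p) = e^{iθ_τ(p)} the ground-truth tangent direction, Ω the set of annotated contour pixels, and f(z; p) = z⁴ + c₂(p)z² + c₀(p):

L_align = (1/|Ω|) Σ_{p∈Ω} |f(τ(p); p)|²
L_align90 = (1/|Ω|) Σ_{p∈Ω} |f(i·τ(p); p)|²   (the tangent rotated by 90°)
L_smooth = (1/|Ω|) Σ_{p∈Ω} (‖∇c₀(p)‖² + ‖∇c₂(p)‖²)

Requiring the field to accommodate both τ and its 90° rotation is what rules out the degenerate line-field solution u = ±v.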
Vectorization
Figure 3: Visualization of the vectorization process [1]
The vectorization process transforms classified raster images into vector polygons using a polygonization method based on the Active Skeleton Model (ASM). The principle of this algorithm is the iterative shifting of the vertices of a skeleton graph to their ideal positions. The skeleton graph - a network of pixels outlining the building's structure - is created by a thinning method applied to the building-wall probability map. The iterative shifting is driven by a gradient optimization method that minimizes an energy function whose components relate to the structure and geometry being analysed:
E_probability fits the skeleton paths to the contour of the building interior probability map at a given probability threshold, e.g. 0.5.
E_frame field align aligns each edge of the skeleton graph with the frame field.
E_length ensures that the node distribution along paths remains homogeneous as well as tight.
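Schematically, the ASM step minimizes a weighted sum of these terms over the vertex positions v_i of the skeleton graph; the weights λ below are generic tuning parameters, not values taken from [1]:

E = λ₁·E_probability + λ₂·E_frame field align + λ₃·E_length
v_i ← v_i − η·∂E/∂v_i

with the gradient-descent update of step size η iterated until the vertex positions converge.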
UniMatch semi-supervised learning
UniMatch [2], an advanced semi-supervised learning method in the consistency-regularization category, builds upon the foundational principles established by FixMatch [3], a baseline method in this domain that operates primarily on the principle of pseudo-labeling combined with consistency regularization.
The basic principle of the FixMatch method involves generating pseudo-labels for unlabeled data from the predictions of a neural network. Specifically, for a weakly perturbed unlabeled input x_w, a prediction p_w is generated, which serves as a pseudo-label for the prediction p_s of the strongly perturbed input x_s. Subsequently, a loss value, for example cross-entropy(p_w, p_s), is calculated, considering only areas of p_w with a probability value greater than a certain threshold, e.g. > 0.95.
UniMatch builds upon and extends the FixMatch methodology, introducing two core enhancements:
UniPerb (Unified Perturbations for Images and Features): perturbations are also applied at the feature level. Practically, this means applying a dropout function to the output (i.e., the features) of the network's encoder, randomly dropping features before they proceed to the decoder part of the network, generating the prediction p_fp.
Instead of one strong perturbation, two are used: x_s1 and x_s2.
Figure 4: (a) The FixMatch baseline; (b) the UniMatch method used here. FP denotes feature perturbation; w and s denote weak and strong perturbation, respectively [2].
Ultimately, there are three unsupervised loss terms: cross-entropy(p_w, p_fp), cross-entropy(p_w, p_s1), and cross-entropy(p_w, p_s2). These are then linearly combined with the supervised loss.
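To make this concrete, here is a minimal PyTorch sketch of the unsupervised part of such a training step; the function and variable names are ours, and the specific values (confidence threshold, dropout rate) are illustrative rather than taken from [2]:

import torch
import torch.nn.functional as F

def unimatch_unsup_loss(encoder, decoder, x_w, x_s1, x_s2,
                        threshold=0.95, p_drop=0.5):
    # Pseudo-labels come from the weakly perturbed view; no gradients flow here.
    with torch.no_grad():
        probs_w = decoder(encoder(x_w)).softmax(dim=1)
        conf, pseudo = probs_w.max(dim=1)      # per-pixel confidence and class
        mask = (conf >= threshold).float()     # keep only confident pixels

    # UniPerb: feature-level perturbation via dropout between encoder and decoder.
    logits_fp = decoder(F.dropout(encoder(x_w), p=p_drop, training=True))

    # Dual-stream: two strongly perturbed views of the same images.
    logits_s1 = decoder(encoder(x_s1))
    logits_s2 = decoder(encoder(x_s2))

    def masked_ce(logits):
        loss = F.cross_entropy(logits, pseudo, reduction="none")  # (B, H, W)
        return (loss * mask).sum() / mask.sum().clamp(min=1.0)

    return masked_ce(logits_fp) + masked_ce(logits_s1) + masked_ce(logits_s2)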
This method currently ranks among the state-of-the-art semi-supervised learning methods. Its main advantage is its simplicity of implementation; its disadvantage is its sensitivity to the choice of suitable weak and strong perturbations.
Integrating UniMatch Semi-Supervised Learning with Frame Field Learning
Implementation Strategy for UniMatch in Frame Field Learning
To integrate UniMatch into our Frame Field learning framework, we first differentiated between weak and strong perturbations. For weak perturbations, we chose basic spatial transformations such as rotation, mirroring, and vertical/horizontal flips. These are well-suited for aerial imagery and straightforward to implement.
For strong perturbations, we opted for photometric transformations. These include adjustments in hue, color, and brightness, providing a more significant alteration to the images compared to spatial transformations.
Incorporating feature perturbation loss was a crucial step. We implemented this by introducing a dropout mechanism between the encoder and decoder parts of the network. This dropout selectively omits features at the feature level, which is essential for the UniMatch approach.
Regarding the dual-stream perturbations of UniMatch, we adapted our model to handle two types of strong perturbations. The dual-stream approach involves using the weak perturbation prediction as a pseudo-label and training the model using the strong perturbation predictions as loss functions. We have two strong perturbations, hence the term 'dual-stream'. Each of these perturbations contributes to the overall robustness and effectiveness of the model in semi-supervised learning scenarios, especially in the context of building extraction from complex aerial imagery.
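As an illustration of this split, a torchvision sketch follows; the specific transforms and parameters are our choices, not prescribed by [2]. Note that spatial transforms must be applied identically to an image and its mask, which is one reason they serve well as the weak perturbation:

import torchvision.transforms as T

# Weak perturbations: basic spatial transforms, well suited to aerial imagery.
weak = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.RandomRotation(degrees=90),
])

# Strong perturbations: photometric changes applied on top of the weak view.
strong = T.Compose([
    T.ColorJitter(brightness=0.5, contrast=0.5, saturation=0.5, hue=0.25),
    T.RandomGrayscale(p=0.2),
    T.GaussianBlur(kernel_size=5),
])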
Through these modifications, the UniMatch method was successfully integrated into the Frame Field learning algorithm, increasing its ability to efficiently process and learn from labeled and, above all, unlabeled data.
Experiments
Dataset
Labeled Data
Our labeled data come from three different sources, which are detailed in Table 1.
Table 1: Overview of the three sources of labeled data used for training the models.
Unlabeled Data
For the unlabeled dataset, we selected high-quality aerial images from Geodetický a kartografický ústav (GKÚ) [6], available for free public use. We specifically targeted a diverse area of 7,000 km², ensuring a wide representation of various landscapes and urban settings.
Data Processing: Patching
We processed both labeled and unlabeled images into patches of size 320x320 px. This patch size is specifically chosen to match the input requirements of our neural network. From the labeled data, this process resulted in approximately 55,000 patches. Similarly, from the unlabeled dataset, we obtained around 244,000 patches.
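A minimal sketch of this patching step (the non-overlapping stride is our assumption; incomplete border remainders are simply discarded here):

import numpy as np

def to_patches(image, size=320):
    """Cut an (H, W, C) image into non-overlapping size x size patches."""
    h, w = image.shape[:2]
    return [image[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]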
Training setup
Model Architecture
We designed our model using a U-Net architecture with an EfficientNet-B4 backbone. This combination provides a good balance of accuracy and efficiency, crucial for handling the complexity of our segmentation tasks. The EfficientNet-B4 backbone was specifically chosen for its optimal balance between memory usage and performance. In Frame Field learning, U-Net architecture has been shown to be highly effective, as evidenced by its strong performance in prior studies.
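For illustration, such a model can be instantiated with the segmentation_models_pytorch library; note that the full Frame Field framework attaches additional output heads for the frame field coefficients, which this sketch omits:

import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="efficientnet-b4",   # EfficientNet-B4 backbone
    encoder_weights="imagenet",       # pretrained encoder initialization
    in_channels=3,                    # RGB aerial imagery
    classes=2,                        # building interior and edge maps
)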
Training Process
For training, we used the AdamW optimizer, which combines the advantages of Adam optimization with weight decay, aiding in better model generalization. To prevent overfitting, we implemented L2 regularization. Additionally, we used the ReduceLROnPlateau learning rate scheduler. This scheduler adjusts the learning rate based on validation loss, ensuring efficient training progress.
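A sketch of this setup (the hyperparameter values below are illustrative, not the ones used in our experiments):

import torch

optimizer = torch.optim.AdamW(model.parameters(),
                              lr=1e-3,            # illustrative learning rate
                              weight_decay=1e-2)  # decoupled L2 regularization
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer,
                                                       mode="min",  # watch validation loss
                                                       factor=0.5,
                                                       patience=5)
# After each validation epoch: scheduler.step(val_loss)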
Semi-Supervised Learning Adjustments
A key aspect of our training was adjusting the ratio of unlabeled to labeled patches. We experimented with ratios ranging from 1:1 to 1:5 (labeled:unlabeled). This variability allowed us to explore the impact of different amounts of unlabeled data on the learning process. It enabled us to identify the optimal balance for training our model, ensuring effective learning while leveraging the advantages of semi-supervised learning in handling large and diverse datasets.
Model evaluation
In our evaluation of the building footprint extraction model, we chose metrics that precisely measure how well our predictions align with real-world structures.
Intersection over Union (IoU)
A key metric we used is Intersection over Union (IoU). It measures the overlap between the model's predictions and the actual building shapes. An IoU score close to 1 means that the predictions closely match the real buildings. This metric is essential for assessing the geometric accuracy of segmented regions, as it reflects how precisely building boundaries are delineated. Moreover, by evaluating the ratio of the correctly predicted area to the combined area (the union of the predicted and actual areas), IoU provides a clear measure of the model's effectiveness in capturing the true extent and shape of buildings in a complex urban landscape.
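Formally, for a predicted region P and a ground-truth region G:

IoU(P, G) = |P ∩ G| / |P ∪ G|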
Precision, Recall and F1
Precision measures the accuracy of the model's building predictions, indicating the proportion of correctly identified buildings out of all identified buildings, thereby reflecting the model's specificity. Recall assesses the model's ability to capture all actual buildings, with a high recall score highlighting its sensitivity in detecting buildings. The F1 Score combines precision and recall into a single metric, offering a balanced view of the model's performance by ensuring that high scores result from both high precision and high recall.
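In terms of true positives (TP), false positives (FP), and false negatives (FN):

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 · (Precision · Recall) / (Precision + Recall)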
Complexity Aware IoU (cIoU)
We also utilized Complexity Aware IoU (cIoU) [7]. This metric addresses a shortfall in IoU by balancing segmentation accuracy and the complexity of the polygon shapes. While IoU alone can lead models to create overly complex polygons, cIoU ensures that the complexity of the polygons (number of vertices) is kept realistic, reflecting the typically less complex structure of real buildings.
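Under our reading of [7], cIoU weights the IoU by the relative difference between the number of vertices N_pred of the predicted polygon and N_gt of the ground-truth polygon:

cIoU = IoU · (1 − |N_pred − N_gt| / (N_pred + N_gt))

so a prediction with the right overlap but an inflated vertex count is penalized.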
N Ratio Metric
The N ratio metric was an additional component of our evaluation strategy. It contrasts the number of vertices in our predicted shapes with those in the actual buildings [7]. This helps in understanding whether our model accurately replicates the detailed structure of the buildings.
Max Tangent Angle Error
To ensure clean geometry in building extraction tasks, accurately measuring contour regularity is essential. The Max Tangent Angle Error (MTAE) [1] metric is designed to address this need by supplementing the Intersection over Union (IoU) metric. It specifically targets the limitation of IoU, where segmentations with rounded corners may receive higher scores than those with more precise, sharp corners. By evaluating the alignment of edges through the comparison of tangent angles at sampled points along predicted and ground truth contours, MTAE effectively penalizes inaccuracies in edge orientation. This focus on edge precision is critical for producing clean vector representations of buildings, emphasizing the importance of accurate edge delineation in segmentation tasks.
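Schematically, with θ̂_i and θ_i the tangent angles at matched sample points i along the predicted and ground-truth contours:

MTAE = max_i |θ̂_i − θ_i|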
Evaluation Process
The trained models were tested on a large dataset of full-size aerial images (rather than the small patches on which the network was trained). Such testing gives a more accurate picture of how these models behave in real-world use. To extract buildings from the full-size images, we used a sliding-window technique, producing predictions segment by segment. An averaging technique was applied to the edges of overlapping segments, which is important for minimizing artifacts and maintaining consistency across the prediction map. The resulting full-size prediction map was then vectorized into precise vector polygons using the Active Skeleton Model (ASM) algorithm.
Results
Table 2: Training results for the baseline (supervised) approach and for the semi-supervised approaches with different ratios of labeled to unlabeled images.
The results of our experiments, reflecting the performance of the segmentation model trained under different conditions, reveal significant insights (see Table 2). We evaluated the model's performance in a baseline scenario without semi-supervised learning and in scenarios where semi-supervised learning was applied with varying ratios of labeled to unlabeled data (1:1, 1:3, and 1:5).
1. IoU: Starting from the baseline IoU of 80.50%, we observed a steady increase in this metric as we introduced more unlabeled data into the training process, reaching up to 85.77% with a 1:5 labeled-to-unlabeled ratio.
2. Precision, Recall, and F1 Score: The precision of the model, which measures how accurate the predictions are, improved from 85.75% in the baseline to 90.04% in the 1:5 ratio setup. Similarly, recall, which indicates how well the model can find all relevant instances, slightly increased from 94.27% to 94.76%. The F1 Score, which balances precision and recall, also saw an improvement from 89.81% to 92.34%. These improvements suggest that the model became more accurate and reliable in its predictions when semi-supervised learning was used.
3. N Ratio and cIoU: The results show a notable decrease in the N Ratio from 2.33 in the baseline to 1.65 in the semi-supervised 1:5 ratio setup, indicating that the semi-supervised model generates simpler, yet accurate, vector shapes that more closely resemble the actual structures. This simplification likely contributes to the enhanced usability of the output in practical GIS applications. Concurrently, the complexity-aware IoU (cIoU) significantly improved from 48.89% in the baseline to 64.75% in the 1:5 ratio, suggesting that the semi-supervised learning approach not only improves the overlap between the predicted and actual building footprints but also produces simpler vector shapes that are closer to real-world buildings in terms of geometry.
4. Mean Max Tangent Angle Error (MTAE): The mean MTAE's reduction from 18.60° in the baseline to 17.45° in the 1:5 semi-supervised setting signifies an improvement in the geometric precision of the model's predictions. This suggests that the semi-supervised model is better at capturing the architectural features of buildings with more accurately defined angles, contributing to the production of topologically simpler and cleaner vector polygons.
Training on High-Performance Computing (HPC) Machine
HPC Configuration
Our training was conducted on a High-Performance Computing (HPC) machine equipped with substantial computational resources. The HPC had 8 nodes, each outfitted with 4 NVIDIA A100 GPUs with 40GB of VRAM, 64 CPU cores, and 256GB of RAM. For task scheduling, the system utilized Slurm.
PyTorch Lightning Framework
We employed the PyTorch Lightning framework, which offers user-friendly multi-GPU settings. This framework allows the specification of the number of GPUs per node, the total number of nodes, various distributed strategies, and the option for mixed-precision training.
Experiences with Slurm and PyTorch Lightning
When training on a single GPU, our Slurm configuration was as follows:
#SBATCH --partition=ngpu
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=16
#SBATCH --mem=64000
In PyTorch Lightning, we set the trainer as:
trainer = Trainer(accelerator="gpu", devices=1)
Since we allocated one of the four GPUs available in a node, we correspondingly allocated 16 of the 64 available CPUs. Therefore, we assigned 16 workers to the data loaders. Since semi-supervised learning uses two data loaders (one for labeled and one for unlabeled data), we allocated 8 workers to each. It was critical to ensure that the total number of data-loader workers did not exceed the available CPUs, to prevent training crashes.
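A sketch of this allocation (the dataset objects and batch sizes are illustrative):

from torch.utils.data import DataLoader

labeled_loader = DataLoader(labeled_dataset, batch_size=8,
                            shuffle=True, num_workers=8)
unlabeled_loader = DataLoader(unlabeled_dataset, batch_size=8,
                              shuffle=True, num_workers=8)
# 8 + 8 workers = 16, matching the 16 CPUs requested from Slurm.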
Distributed Data Parallel (DDP) Strategy
Using PyTorch Lightning's Distributed Data Parallel (DDP) option, we ensured each GPU across the nodes operated independently:
Each GPU processed a portion of the dataset.
All processes initiated the model independently.
Each conducted forward and backward passes in parallel.
Gradients were synchronized and averaged across processes.
Each process updated its optimizer individually.
With this approach, the total number of data loaders equals the number of GPUs multiplied by the number of loader types. For example, in a semi-supervised learning setup with 4 GPUs and two types of data loaders (labeled and unlabeled), we ended up with 8 data loaders, each with 8 workers - 64 workers in total.
To fully utilize one node with four GPUs, we used the following configuration:
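A configuration consistent with the description above would be (the memory request assumes the full 256 GB node):

#SBATCH --partition=ngpu
#SBATCH --gres=gpu:4
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=16
#SBATCH --mem=256000

trainer = Trainer(accelerator="gpu", devices=4, strategy="ddp")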
Using PyTorch Lightning, it is also possible to leverage multiple nodes on an HPC system. For instance, using 4 nodes with 4 GPUs each (16 GPUs in total) was configured as:
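A Trainer setup consistent with this description:

trainer = Trainer(accelerator="gpu", devices=4, num_nodes=4, strategy="ddp")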
Correspondingly, the Slurm configuration was set to:
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4
These settings and experiences highlight the scalability and flexibility of training complex machine learning models in an HPC environment, especially for tasks demanding significant computational resources, such as semi-supervised learning in geospatial data analysis.
Training Scalability Analysis
Table 3: Training results for the supervised and semi-supervised learning approaches with 1, 2, 4, and 8 GPUs. For each configuration, the time per epoch and the speedup ratio relative to 1 GPU are given.
In the Training Scalability Analysis, we carefully examined the impact of expanding computational resources on the efficiency of training models, utilizing the PyTorch Lightning framework. This investigation covered both supervised and semi-supervised learning approaches, with a particular emphasis on the effects of increasing GPU numbers, including setups involving 2 nodes (or 8 GPUs).
Figure 5: This graph compares the actual speedup ratios for supervised and semi-supervised learning against the number of GPUs, alongside the ideal linear speedup ratio. It showcases the closer alignment of semi-supervised learning with ideal scalability, emphasizing its greater efficiency gains from increased computational resources.
A key finding from this analysis was that the increase in speedup ratios for supervised learning did not perfectly align with the number of GPUs utilized. Ideally, doubling the number of GPUs would directly double the speedup ratio (e.g., using 4 GPUs would result in a 4x speedup). However, the actual speedup ratios were lower than this ideal expectation. This discrepancy can be attributed to the overhead associated with managing multiple GPUs and nodes, particularly the need to synchronize data across all GPUs, which introduces efficiency losses.
Semi-supervised learning showed a slightly different trend, coming closer to the ideal (linear) speedup. The complexity and higher computational demands of semi-supervised learning appear to mitigate the impact of the overhead, thereby allowing more efficient use of multiple GPUs. Despite the challenges associated with synchronizing data across multiple GPU cards and compute nodes, the higher computational demands of semi-supervised learning allow resources to scale more efficiently, i.e., with a speedup closer to the ideal scenario.
Conclusion
The research presented in this whitepaper has successfully demonstrated the effectiveness of integrating UniMatch semi-supervised learning with Frame Field learning for the task of building extraction from aerial imagery. This integration addresses the challenges associated with the scarcity of labeled data in deep learning applications for geographic information systems (GIS), providing a cost-effective and scalable solution.
Our findings reveal that employing semi-supervised learning significantly enhances the model's performance across several key metrics, including Intersection over Union (IoU), precision, recall, F1 Score, N Ratio, complexity-aware IoU (cIoU), and Mean Max Tangent Angle Error (MTAE). Notably, the improvements in IoU and cIoU metrics underscore the model's increased accuracy in delineating building footprints and generating vector shapes that closely resemble actual structures. This outcome is pivotal for applications in urban planning, environmental studies, and infrastructure management, where precise mapping and analysis of building data are crucial.
The methodology adopted, which combines Frame Field learning with the innovative UniMatch approach, has proven to be highly effective in leveraging both labeled and unlabeled data. This strategy not only improves the geometric precision of the model's predictions but also ensures the generation of cleaner, topologically accurate vector polygons. Furthermore, the scalability and efficiency of training on a High-Performance Computing (HPC) machine using the PyTorch Lightning framework and Distributed Data Parallel (DDP) strategy have been instrumental in handling the extensive computational demands of the semi-supervised learning process on the data at hand, within a time frame ranging from tens of minutes to hours.
This work highlights the potential of semi-supervised learning for improving automatic building extraction from aerial imagery. The implementation of UniMatch within the Frame Field learning method represents a significant step forward, providing a robust solution to the challenges of data scarcity and the demand for high accuracy in geospatial data analysis. This approach improves the efficiency and accuracy of building extraction, and it also opens new possibilities for applying semi-supervised learning methods in GIS and related fields.
Acknowledgment
Research results were obtained with the support of the Slovak National competence centre for HPC, the EuroCC 2 project and Slovak National Supercomputing Centre under grant agreement 101101903-EuroCC 2-DIGITAL-EUROHPC-JU-2022-NCC-01.
Computational resources were procured in the national project National competence centre for high performance computing (project code: 311070AKF2) funded by European Regional Development Fund, EU Structural Funds Informatization of society, Operational Program Integrated Infrastructure.
[1] Nicolas Girard, Dmitriy Smirnov, Justin Solomon, and Yuliya Tarabalka. “Polygonal Building Extraction by Frame Field Learning”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (June 2021), pp. 5891-5900.
[2] L. Yang, L. Qi, L. Feng, W. Zhang, and Y. Shi. “Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation”. In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (June 2023), pp. 7236-7246. doi: 10.1109/CVPR52729.2023.00699.
[3] Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. “FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence”. In: CoRR, vol. abs/2001.07685 (2020). Available: https://arxiv.org/abs/2001.07685.
[4] Emmanuel Maggiori, Yuliya Tarabalka, Guillaume Charpiat, and Pierre Alliez. “Can Semantic Labeling Methods Generalize to Any City? The Inria Aerial Image Labeling Benchmark”. In: IEEE International Geoscience and Remote Sensing Symposium (IGARSS) (2017). IEEE.
[5] Adrian Boguszewski, Dominik Batorski, Natalia Ziemba-Jankowska, Tomasz Dziedzic, and Anna Zambrzycka. “LandCover.ai: Dataset for Automatic Mapping of Buildings, Woodlands, Water and Roads from Aerial Imagery”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (June 2021), pp. 1102-1110.
[7] Stefano Zorzi, Shabab Bazrafkan, Stefan Habenschuss, and Friedrich Fraundorfer. “PolyWorld: Polygonal Building Extraction with Graph Neural Networks in Satellite Images”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2022), pp. 1848-1857.
On June 10, representatives of the national competence centres for HPC from the Central European region met at the third Central European NCC working group meeting. The hybrid event was organized by NCC Austria in Grundlsee. The workshop was attended by competence centres for HPC from Poland, Austria, Croatia, the Czech Republic, Slovakia, Slovenia, and Hungary.
The Competence Centers focused on the topic of collaboration between NCC and other institutions, as well as improving company profiles on LinkedIn. The event was opened by Markus Stöhr, head of the Austrian Competence Center for High-Performance Computing. This was followed by a brief introduction of each Competence Center, during which the heads of the respective NCCs presented not only the team members but also the most important topics they would like to address during the meeting. This was followed by a moderated discussion on the aforementioned topics led by Markus Stöhr and an overview and update on training cooperation led by Claudia Blaas-Schenner.
Another very interesting topic of the workshop was LinkedIn profile optimization. This internal workshop focused on best practices for creating a professional-looking LinkedIn page. The workshop was intended for anyone interested in improving their personal profile as well as their business page. The workshop was led by Natascha Trzepizur, a content marketing expert from INiTS in Vienna.
The final section, titled AI Focus, was dedicated to:
AI-Generated Reporting: A discussion on the use of artificial intelligence in report creation. Speakers: Markus Stöhr, Simeon Harrison, Thomas Mayerhofer.
Training series - AI for Industry: A discussion of a series of trainings focused on the use of AI in industry. Speakers: Simeon Harrison, Thomas Mayerhofer.
The meeting of the Central European NCC working group was a unique opportunity to exchange experiences, deepen cooperation and improve the professional competences of the participants. We thank the Austrian Competence Center for organizing this great event and we look forward to the next meeting of our working group!
We are pleased to announce a new partnership between the National Competence Centre for High-Performance Computing (NCC for HPC) and the Slovak Chamber of Commerce and Industry for Bratislava Region (SOPK BA). This alliance is part of the HPC Ambassador program and aims to support innovations and the adoption of HPC technologies among small and medium-sized enterprises (SMEs) in Slovakia.
Collaboration Mechanism
As part of this collaboration, NCC for HPC will regularly provide information about its activities, training sessions, and services relevant to the members of SOPK BA. SOPK BA will disseminate this information among its members and identify companies ready to implement HPC technologies, linking them with NCC for HPC. This process will enable businesses to receive expert assistance and support in various research and development projects.
Benefits for SMEs
The partnership offers numerous advantages for Slovak SMEs, including:
Access to powerful computational resources.
Organization of training sessions and informative lectures focused on HPC technologies.
Support in the implementation of HPC technologies in research and development.
Collaboration with experts to carry out "proof-of-concept" projects.
We look forward to a successful collaboration and many joint initiatives that will add value to the Slovak business environment.
We are pleased to announce that the National Competence Centre for HPC has established a new partnership with the Union of Clusters of Slovakia (ÚKS) as part of the HPC Ambassador program. This significant step will strengthen our joint efforts in promoting innovation and the adoption of high-performance computing technologies among clusters and their members from various sectors, with a focus on small and medium-sized enterprises in Slovakia.
Shared Goals and Visions
Our shared goal is to raise awareness and promote the adoption of HPC technologies among Slovak clusters, which bring together businesses, research institutions, and the academic sphere. In collaboration with ÚKS, we will organize events, training sessions, and informational campaigns that will provide cluster members with the necessary knowledge and tools to utilize HPC in their activities.
How We Collaborate
As part of this collaboration, the NCC will regularly share information about its activities, training sessions, and services that are relevant to the members and partners of ÚKS. ÚKS, in turn, will use its communication channels to inform its members about these opportunities and will connect businesses ready to utilize HPC technologies with the NCC. ÚKS members will also gain access to expert assistance and support for various research or development projects.
We believe that this partnership will significantly contribute to the development of the innovation ecosystem in Slovakia and help cluster members become more competitive in the global market.
We look forward to a successful collaboration and many joint projects ahead!
On May 21-22, 2024, an ORCA workshop took place in Bratislava. This two-day workshop, organized by the national competence centres for HPC from Slovakia and Poland, focused on the ORCA quantum chemistry software package. Participants were introduced to the basics as well as selected advanced techniques of working with ORCA, with a significant portion of the workshop dedicated to practical exercises.
The workshop began with the registration and welcome of participants, followed by a presentation of the EuroCC 2 competence centre project. Instructor Klemens Noga from Cyfronet, the HPC centre in Krakow, introduced the Polish HPC ecosystem and the opportunities available to users in the field of computational chemistry.
During the first day, the instructor focused on the practical aspects of setting up ORCA on HPC systems. Participants were introduced to the setup and operation of HPC clusters, best practices for installing ORCA, and the SLURM tool, which is used for managing computational tasks. They were familiarized with the structure and syntax of input files required for basic quantum chemical calculations, such as single-point energy calculations, property calculations, and geometry optimization. The workshop also covered output analysis, extraction of useful information, and visualization of results for better interpretation and presentation.
The second day began with more advanced computational options in ORCA. Participants learned how to set up calculations for vibrational frequencies, relativistic corrections, and spectroscopic properties. The instructor also focused on explaining scalability metrics, strategies for improving performance, and the efficiency of parallel computations on HPC systems, which the participants then practically tested. In the afternoon, participants worked on redox potential calculations and went through case studies prepared based on their own materials. The final part of the workshop dealt with troubleshooting calculations and sharing best practices to maximize the efficiency and accuracy of computations in ORCA.
The program also included a tour of the Devana supercomputer at the Computing Center of the Slovak Academy of Sciences. The event provided participants with fundamental knowledge and skills for working with the ORCA package in the field of quantum chemical computations on HPC systems. We believe the workshop was beneficial to the participants and assisted them in their work on their own research projects. The ORCA package is part of the software suite of the Devana system, and its license is freely available to all academic users.
The course encompasses a comprehensive curriculum designed to cover the primary features of the Quantum ESPRESSO package. The emphasis is on practical skill development. The course strikes a balance between theory and application, offering a hands-on learning experience. It caters to a beginner-to-intermediate level, aiming to equip participants with the fundamental knowledge and skills necessary for the effective utilization of Quantum ESPRESSO in their research and academic pursuits.
The school is designed for participants with a background in condensed matter physics or chemistry interested in learning to use Quantum ESPRESSO.
The school plans to cover the main features of the code and provide basic user skills such as compilation, simple scripting, choice of parallel options, and similar.
Devana: Call for Standard HPC access projects 2/24
The Computing Centre of the Slovak Academy of Sciences and the National Supercomputing Centre are opening this year's second Call for Projects for Standard Access to HPC, 2/24. Project applications can be submitted continuously; as standard, there are three closing dates during the year, after which the submitted applications are evaluated. It is possible to apply for access through the user portal register.nscc.sk.
Standard access to high-performance computing resources is open to all areas of science and research, especially for larger-scale projects. These projects should demonstrate excellence in the respective fields and a clear potential to bring innovative solutions to current social and technological challenges. In the application, it is necessary to demonstrate the efficiency and scalability of the proposed calculation strategies and methods in the HPC environment. The necessary data on the performance and parameters of the considered algorithms and applications can be obtained within the Testing Access.
Allocations are awarded for one (1) year with the option to apply for extension, if necessary. Access is free of charge, provided that all requirements defined in the Terms of reference are met. Submitted projects are evaluated from a technical point of view by the internal team of CC SAS and SK NSCC, and the quality of the scientific and research part is always evaluated by two independent external reviewers.
Call opening date: 6.5.2024
Call closing date: 31.5.2024, 17:00 CET
Communication of allocation decision: up to 2 weeks from call closing
Start of the allocation period for awarded projects: no later than 15.2.2024
Eligible researchers: Scientists and researchers from Slovak public universities and the Slovak Academy of Sciences, as well as from public and state administration organizations and private enterprises registered in the Slovak Republic, can apply for standard access to HPC. Access is provided exclusively for civil and non-commercial open-science research and development. Interested parties from private companies should first contact the National Competence Centre for HPC.
Obligations of awarded projects:
Final report within 2 months of the end of the project.
Peer-reviewed and other publications in domestic and foreign scientific periodicals, with acknowledgments in the pre-defined wording, reported through the user portal.
Active participation in the Slovak HPC conference organized by the coordinator of this call (poster, other contribution).
Participation in dissemination activities of the coordinator (interview, article in the HPC magazine, etc.).