parallel computing - RISC2 Project (https://www.risc2-project.eu)

Subsequent Progress And Challenges Concerning The México-UE Project ENERXICO: Supercomputing And Energy For México
https://www.risc2-project.eu/2023/05/24/subsequent-progress-and-challenges-concerning-the-mexico-ue-project-enerxico-supercomputing-and-energy-for-mexico/ | Wed, 24 May 2023

In this short note, we briefly describe some subsequent advances and challenges related to two of the work packages developed in the ENERXICO Project. The project opened the possibility of collaborating with colleagues from institutions that did not participate in it, for example from the University of Santander in Colombia and from the University of Vigo in Spain. This exemplifies the importance of the RISC2 project, in the sense that strengthening collaboration and finding joint research areas and applied HPC ventures is of great benefit to both our Latin American countries and the EU. We are now initiating talks to target several energy-related topics with some of the RISC2 partners.

The ENERXICO Project focused on developing advanced simulation software solutions for the oil & gas, wind energy and transportation powertrain industries. The institutions that collaborated in the project were, for México: ININ (institution responsible for México), Centro de Investigación y de Estudios Avanzados del IPN (Cinvestav), Universidad Nacional Autónoma de México (UNAM IINGEN, FCUNAM), Universidad Autónoma Metropolitana-Azcapotzalco, Instituto Mexicano del Petróleo, Instituto Politécnico Nacional (IPN) and Pemex; and for the European Union: Barcelona Supercomputing Center (institution responsible for the EU), Technische Universität München (TUM, Germany), Université Grenoble Alpes (UGA, France), CIEMAT (Spain), Repsol, Iberdrola, Bull (France) and Universidad Politécnica de Valencia (Spain).

The project comprised four work packages (WPs):

WP1 Exascale Enabling: This was a cross-cutting work package that focused on assessing performance bottlenecks and improving the efficiency of the HPC codes proposed in the vertical WPs (EU Coordinator: BULL, MEX Coordinator: CINVESTAV-COMPUTACIÓN);

WP2 Renewable energies: This WP deployed new applications required to design, optimize and forecast the production of wind farms (EU Coordinator: IBR, MEX Coordinator: ININ);

WP3 Oil and gas energies: This WP addressed the impact of HPC on the entire oil industry chain (EU Coordinator: REPSOL, MEX Coordinator: ININ);

WP4 Biofuels for transport: This WP carried out advanced numerical simulations of biofuels under conditions similar to those inside an engine (EU Coordinator: UPV-CMT, MEX Coordinator: UNAM).

For WP1, the following codes were optimized for exascale computers: Alya, Bsit, DualSPHysics, ExaHyPE, SeisSol, SEM46 and WRF.

As an example, we present some of the results for the DualSPHysics code. We evaluated two architectures. The first set of hardware consisted of identical nodes, each equipped with two Intel Xeon Gold 6248 processors clocked at 2.5 GHz and about 192 GB of system memory; each node contained four NVIDIA V100 Tesla GPUs with 32 GB of memory each. The second set of hardware consisted of identical nodes, each equipped with two AMD Milan 7763 processors clocked at 2.45 GHz and about 512 GB of system memory; each node contained four NVIDIA Ampere GPUs with 40 GB of memory each. The code was compiled and linked with CUDA 10.2 and OpenMPI 4, and the application was executed using one GPU per MPI rank.
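
To make the "one GPU per MPI rank" execution model concrete, the sketch below shows a generic way of binding each rank to a distinct device on its node. This is an illustration only, not the DualSPHysics launcher: it assumes mpi4py is available and four GPUs per node, as in the systems described above.

```python
# Illustrative sketch: bind each MPI rank to one GPU on its node.
# Assumes mpi4py and 4 GPUs per node (as in the nodes described above);
# this is not code from the DualSPHysics project.
import os
from mpi4py import MPI

GPUS_PER_NODE = 4  # assumption taken from the hardware description above

world = MPI.COMM_WORLD
# Group the ranks that share a node, so each one gets a distinct local index.
node_comm = world.Split_type(MPI.COMM_TYPE_SHARED)
local_rank = node_comm.Get_rank()

# Expose a single device to this rank before any CUDA library initializes.
os.environ["CUDA_VISIBLE_DEVICES"] = str(local_rank % GPUS_PER_NODE)

print(f"world rank {world.Get_rank()} -> GPU {local_rank % GPUS_PER_NODE} "
      f"on {MPI.Get_processor_name()}")
```

Launched with, for example, `mpirun -np 8 python bind_gpu.py` across two such nodes, each rank would see exactly one of the four GPUs of its node.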

In Figures 1 and 2 we show the strong and weak scaling behaviour of the code; both tests indicate that the scaling is very good. Motivated by these excellent results, we are in the process of performing new SPH simulations on the LUMI supercomputer with up to 26,834 million particles, to be run on up to 500 GPUs, that is, about 53.7 million particles per GPU. These simulations will first target a Wave Energy Converter (WEC) farm (see Figure 3), and later turbulence models.

Figure 1. Strong scaling test with a fixed number of particles and an increasing number of GPUs.

 

Figure 2. Weak scaling test with an increasing number of particles and GPUs.

 

Figure 3. Wave Energy Converter (WEC) Farm (taken from https://corpowerocean.com/)
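
For readers who want to apply the same analysis to their own runs, the quantities plotted in Figures 1 and 2 are typically derived from wall-clock timings as sketched below. The timings in the example are placeholders, not the measured DualSPHysics results.

```python
# Illustrative sketch: computing strong- and weak-scaling efficiency from
# wall-clock timings. The numbers are placeholders, not the data behind
# Figures 1 and 2.

def strong_scaling_efficiency(t_ref, t_n, gpus_ref, gpus_n):
    """Fixed total problem size: ideal runtime shrinks linearly with GPU count."""
    speedup = t_ref / t_n
    return speedup / (gpus_n / gpus_ref)

def weak_scaling_efficiency(t_ref, t_n):
    """Problem size grows with GPU count: ideal runtime stays constant."""
    return t_ref / t_n

timings = {4: 100.0, 8: 52.0, 16: 27.5}   # seconds, hypothetical
for gpus, t in sorted(timings.items()):
    eff = strong_scaling_efficiency(timings[4], t, 4, gpus)
    print(f"{gpus:>3} GPUs: speedup {timings[4] / t:.2f}, "
          f"strong-scaling efficiency {eff:.2f}")
```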

 

As part of WP3, ENERXICO developed a first version of a computer code called Black Hole (or BH code) for the numerical simulation of oil reservoirs, based on the numerical technique known as Smoothed Particle Hydrodynamics (SPH). This new code is an extension of the DualSPHysics code (https://dual.sphysics.org/), is the first SPH-based code developed for the numerical simulation of oil reservoirs, and offers important benefits over commercial codes based on other numerical techniques.
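
Since the paragraph above leans on the SPH method, a minimal sketch of its core idea may help: every field is evaluated as a kernel-weighted sum over neighbouring particles. The code below uses the standard cubic spline kernel and a brute-force density summation; it is a generic textbook illustration, not code from DualSPHysics or the BH code (production codes use neighbour lists instead of the O(N²) loop).

```python
# Generic SPH illustration (not BH/DualSPHysics code): density estimation
# with the standard 3D cubic spline smoothing kernel.
import numpy as np

def cubic_spline_kernel(r, h):
    """3D cubic spline kernel W(r, h) with support radius 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)           # 3D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Density at each particle: rho_i = sum_j m_j * W(|r_i - r_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# Tiny example: 200 equal-mass particles filling a unit box (total mass 1).
rng = np.random.default_rng(0)
pos = rng.random((200, 3))
rho = sph_density(pos, np.full(200, 1.0 / 200), h=0.15)
print("mean estimated density:", rho.mean())   # roughly 1, up to boundary effects
```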

The BH code is a large-scale, massively parallel reservoir simulator capable of performing simulations with billions of “particles”, or fluid elements, that represent the system under study. It contains improved multi-physics modules that automatically combine the effects of interrelated physical and chemical phenomena to accurately simulate in-situ recovery processes. This work has also led to the development of a graphical user interface, a multi-platform application for code execution and visualization, and to simulations with data provided by industrial partners and comparisons with available commercial packages.

Furthermore, a considerable effort is currently being made to simplify the process of setting up the input for reservoir simulations from exploration data, by means of a workflow fully integrated into our industrial partners’ software environment. A crucial part of the numerical simulations is the equation of state. We have developed an equation of state based on crude oil data (the so-called PVT data) in two forms: first, as a subroutine integrated into the code, and second, as a subroutine that interpolates property tables generated from the equation-of-state subroutine.
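
As a concrete, if simplified, picture of the second form described above (interpolating pre-generated property tables), the sketch below interpolates two common PVT properties as a function of pressure. The table values are invented for the example; in the real workflow they would be produced by the project's equation-of-state subroutine.

```python
# Illustrative sketch of the table-interpolation form of the equation of state.
# The PVT values below are made up; a real table would be generated from the
# equation-of-state subroutine mentioned in the text.
import numpy as np

pressure = np.array([5.0, 10.0, 15.0, 20.0, 25.0])    # MPa
bo       = np.array([1.05, 1.12, 1.18, 1.23, 1.27])   # oil formation volume factor, rm3/sm3
rs       = np.array([20.0, 45.0, 70.0, 95.0, 118.0])  # solution gas-oil ratio, sm3/sm3

def pvt_lookup(p_mpa):
    """Linearly interpolate PVT properties at a given pressure (MPa)."""
    return {
        "Bo": float(np.interp(p_mpa, pressure, bo)),
        "Rs": float(np.interp(p_mpa, pressure, rs)),
    }

print(pvt_lookup(12.5))   # properties between the 10 MPa and 15 MPa table rows
```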

An oil reservoir consists of a porous rock medium containing a multiphase fluid made up of oil, gas and water, together with other solids. The aim of the code is to simulate fluid flow in the porous medium, as well as the behaviour of the system at different pressures and temperatures. The tool should help reduce uncertainties in the predictions that are carried out. For example, it may answer questions about the benefits of injecting a solvent (CO2, nitrogen, combustion gases, methane, etc.) into a reservoir, and predict the breakthrough times of the gases at the production wells. With these estimates, operators can take the necessary measures to mitigate their presence and can calculate the cost, the injection pressure, the injection volumes and, most importantly, where and for how long to inject. The same applies to more complex processes, such as those where fluids, air or steam are injected and interact with the rock, oil, water and gas present in the reservoir. The simulator should also be capable of supporting monitoring and the preparation of measurement plans.

In order to perform a simulation of an oil reservoir field, an initial model needs to be created. Using geophysical forward and inverse numerical techniques, the ENERXICO project evaluated novel, high-performance simulation packages for challenging seismic exploration cases characterized by extreme geometric complexity. We are now exploring high-order methods based on fully unstructured tetrahedral meshes, as well as tree-structured Cartesian meshes with adaptive mesh refinement (AMR), for better spatial resolution. Using this methodology, our packages (and some commercial packages), together with seismic and geophysical data from naturally fractured reservoir fields, are able to create the geometry (see Figure 4) and reproduce the basic properties of the oil reservoir field we want to study. A number of numerical simulations are then performed, and from these, exploitation scenarios for the oil fields are generated.

 

Figure 4. A detail of the initial model for an SPH simulation of a porous medium.

 

More information about the ENERXICO Project can be found at https://enerxico-project.eu/.

By: Jaime Klapp (ININ, México) and Isidoro Gitler (Cinvestav, México)

 

 

 

 

More than 100 students participated in the HPC, Data & Architecture Week
https://www.risc2-project.eu/2023/03/21/more-than-100-students-participated-in-the-hpc-data-architecture-week/ | Tue, 21 Mar 2023

RISC2 supported the ‘HPC, Data & Architecture Week’, which took place between March 13 and 17, 2023, in Buenos Aires. This initiative aimed to revive and deepen the training of human resources for the development of scientific applications and their efficient use in parallel computing environments.

The event featured four main courses: “Foundations of Parallel Programming”, “Large-scale data processing and machine learning”, “New architectures and specific computing platforms”, and “Administration techniques for large-scale computing facilities”.

More than 100 students, who traveled from different parts of the country, actively participated in the event. Thirty of them received financial support for travel and accommodation, provided by the National HPC System (SNCAD), which depends on Argentina’s Ministry of Science.

Esteban Mocskos, one of the organizers of the event, believes that “this kind of event should be organized regularly to sustain the flow of students into the area of HPC”. In his opinion, “a lot of students from Argentina get their first contact with HPC topics here. In such a large country, impacting a distant region also means impacting the neighboring countries. Those students will bring their experience to other students in their home regions”. According to Mocskos, initiatives like the “HPC, Data & Architecture Week” spark a lot of collaborations.

14th International SuperComputing Camp 2023
https://www.risc2-project.eu/events/14th-international-supercomputing-camp-2023/ | Mon, 27 Feb 2023

Costa Rica HPC School 2023 aimed at teaching the fundamental tools and methodologies in parallel programming
https://www.risc2-project.eu/2023/02/14/costa-rica-hpc-school-2023-aimed-at-teaching-the-fundamental-tools-and-methodologies-in-parallel-programming/ | Tue, 14 Feb 2023

The Costa Rica HPC School 2023, organized by CeNAT in collaboration with the RISC2 project, took place between January 30 and February 3 at the Costa Rica National High Technology Center. The main goal of the School was to offer a platform for learning the fundamental tools and methodologies of parallel programming. Since it was held in person, it also fostered networking and team building. The School gathered 32 attendees, mostly students, but also professors and researchers.

Building on the success of previous editions, the seventh installment of the Costa Rica High Performance Computing School (CRHPCS) aimed at preparing students and researchers to introduce HPC tools into their workflows. A selected team of international experts taught sessions on shared-memory programming, distributed-memory programming, accelerator programming, and high-performance computing. This edition had as instructors Alessandro Marani and Nitin Shukla from CINECA, who greatly helped bring a vibrant environment to the sessions.

Bernd Mohr, from the Jülich Supercomputing Centre, was the keynote speaker of this year’s edition of the event. A well-known figure in the HPC community at large, Bernd presented the talk “Parallel Performance Analysis at Scale: From Single Node to One Million HPC Cores”. In an engaging tour through different architecture setups, he highlighted the importance and challenges of performance analysis.

For Esteban Meneses, Costa Rica HPC School General Chair, the School is a key element in building a stronger and more connected HPC community in the region: “This year, thanks to the RISC2 project, we were able to gather participants from Guatemala, El Salvador, and Colombia. Creating these ties is fundamental for later developing more complex initiatives. We aim at preparing future scientists that will develop groundbreaking computer applications that tackle the most pressing problems of our region.”

More information here. 

LNCC’s HPC Summer School provided sessions related to HPC to their community
https://www.risc2-project.eu/2023/01/30/lnccs-hpc-summer-school-provided-sessions-related-to-hpc-to-their-community/ | Mon, 30 Jan 2023

LNCC, one of the RISC2 Brazilian partners, organized the HPC Summer School “Escola Supercomputador Santos Dumont”, which took place from January 16 to 24, 2023, as part of LNCC’s Summer Program.

The School aimed to provide the SDumont user community, and the high-performance computing community in general, with mini-courses and talks related to programming on high-performance computers, covering parallel programming models, profiling tools, and libraries for developing optimized parallel algorithms.

Due to the extensive territory of Brazil and the number of research projects, it is essential to offer regular HPC schools to the research community. According to Carla Osthoff, one of the organizers of the School, “SDumont is the only Brazilian supercomputer dedicated to the research community that is part of the TOP500 list. The Brazilian Ministry of Science and Technology offers free access to all Brazilian research projects in the country and their foreign collaborators. Currently, we have 238 research projects from 18 research areas. This edition of the School received 350 registrations, but we also provided online YouTube access to the community.”

The event took place remotely, and all the sessions are available on YouTube.

Webinar: Improving energy-efficiency of High-Performance Computing clusters
https://www.risc2-project.eu/events/webinar-7-improving-energy-efficiency-of-high-performance-computing-clusters/ | Thu, 26 Jan 2023


Date: April 26, 2023 | 3 p.m. (UTC+1)

Speakers: Lubomir Riha and Ondřej Vysocký, IT4Innovations National Supercomputing Center

Moderator: Esteban Mocskos, Universidad de Buenos Aires

High-Performance Computing centers consume megawatts of electrical power, which is a limiting factor in building bigger systems on the path to exascale and post-exascale clusters. Such high power consumption leads to several challenges, including the need for a robust power supply and distribution network, enormous energy bills, and significant CO2 emissions. To increase power efficiency, vendors provide various kinds of heterogeneous hardware, which users’ applications must fully exploit in order to run efficiently. These requirements may be hard to fulfill, which opens up the possibility of limiting the available resources in exchange for additional power and energy savings with little or no performance penalty.

The talk will present best practices on how to grant rights to control hardware parameters, how to measure the energy consumption of the hardware, and what can be expected from performing energy-saving activities based on hardware tuning.
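
As a small, concrete example of the kind of energy measurement the talk refers to, the sketch below reads the package energy counter that the Linux powercap (Intel RAPL) interface exposes in sysfs. It assumes an Intel node with the intel_rapl driver loaded and read permission on the counters; it is not the MERIC runtime used by the speakers.

```python
# Illustrative sketch: measure package energy around a code region via the
# Linux powercap / Intel RAPL sysfs files. Assumes an Intel CPU with the
# intel_rapl driver and readable counters; not the MERIC library.
import time

RAPL_PKG = "/sys/class/powercap/intel-rapl:0"     # CPU package 0

def read_uj(name):
    with open(f"{RAPL_PKG}/{name}") as f:
        return int(f.read())

max_range = read_uj("max_energy_range_uj")        # value at which the counter wraps

start = read_uj("energy_uj")
time.sleep(5)                                     # ... run the workload under test ...
end = read_uj("energy_uj")

joules = ((end - start) % max_range) / 1e6        # modulo handles counter overflow
print(f"package 0: ~{joules:.1f} J over 5 s, average power ~{joules / 5:.1f} W")
```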

About the speakers:

Lubomir Riha, Ph.D., is the Head of the Infrastructure Research Lab at IT4Innovations National Supercomputing Center. Previously, he was a research scientist in the High-Performance Computing Lab at George Washington University, ECE Department. He received his Ph.D. degree in Electrical Engineering from the Czech Technical University in Prague, Czech Republic, and a Ph.D. degree in Computer Science from Bowie State University, USA. Currently, he is the local principal investigator of two EuroHPC Centres of Excellence, MAX and SPACE, and of two EuroHPC projects, SCALABLE and EUPEX (which is designing a prototype of a European exascale machine). Previously, he was the local PI of the H2020 Centre of Excellence POP2 and of the H2020 FET-HPC READEX project. His research interests are the optimization of HPC applications, energy-efficient computing, the acceleration of scientific and engineering applications using GPUs and many-core accelerators, and parallel and distributed rendering.

Ondrej Vysocky is a Ph.D. candidate at VSB – Technical University of Ostrava, Czech Republic, and also works in the Infrastructure Research Lab at IT4Innovations. His research focuses on energy efficiency in high-performance computing. He was an investigator of the Horizon 2020 READEX project, which dealt with the energy efficiency of parallel applications through dynamic tuning. Since then, he has been developing the MERIC library, a runtime system for energy measurement and hardware-parameter tuning during a parallel application run. Using this library, he contributes to several H2020 projects, including Performance Optimisation and Productivity (POP2) and the European Pilot for Exascale (EUPEX). He is also a member of the PowerStack initiative, which works on a holistic, extensible, and scalable approach to power management.

Webinar: Developing complex workflows that include HPC, Artificial Intelligence and Data Analytics
https://www.risc2-project.eu/events/webinar-5-developing-complex-workflows-that-include-hpc-artificial-intelligence-and-data-analytics/ | Tue, 24 Jan 2023


Date: February 22, 2023 | 4 p.m. (UTC)

Speaker: Rosa M. Badia, Barcelona Supercomputing Center

Moderator: Esteban Mocskos, Universidad de Buenos Aires

The evolution of High-Performance Computing (HPC) systems towards ever more complex machines is opening up the opportunity to host larger and more heterogeneous applications. In this sense, the demand for applications that are not purely HPC, but that combine aspects of Artificial Intelligence and/or Data Analytics, is becoming more common. However, there is a lack of environments that support the development of these complex workflows. The webinar will present PyCOMPSs, a task-based parallel programming model for Python. Based on simple annotations, sequential Python programs can be executed in parallel on HPC clusters and other distributed infrastructures.
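
To give a flavour of the annotations mentioned above, here is a minimal sketch based on PyCOMPSs’ publicly documented decorator API; real workflows would of course use coarser tasks and richer parameter annotations. It is launched with the runcompss command rather than the plain Python interpreter.

```python
# Minimal PyCOMPSs-style sketch based on its documented decorator API.
# Run with the COMPSs launcher, e.g.:  runcompss sum_squares.py
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=1)              # each call becomes an asynchronous task
def square(x):
    return x * x

@task(returns=1)
def add(a, b):
    return a + b

if __name__ == "__main__":
    partials = [square(i) for i in range(8)]   # independent tasks run in parallel
    total = partials[0]
    for p in partials[1:]:
        total = add(total, p)                  # the runtime tracks data dependencies
    total = compss_wait_on(total)              # synchronize and fetch the result
    print("sum of squares:", total)
```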

PyCOMPSs has been extended to support tasks that invoke HPC applications and can be combined with Artificial Intelligence and Data analytics frameworks.

Some of these extensions are being made in the framework of the eFlows4HPC project, which is also developing the HPC Workflows as a Service (HPCWaaS) methodology to make the development, deployment, execution and reuse of workflows easier. The webinar will present the current status of the PyCOMPSs programming model and how it is being extended in the eFlows4HPC project to meet the project’s needs. The HPCWaaS methodology will also be introduced.

About the speaker: Rosa M. Badia holds a PhD in Computer Science (1994) from the Technical University of Catalonia (UPC). She is the manager of the Workflows and Distributed Computing research group at the Barcelona Supercomputing Center (BSC).

Her current research interests are programming models for complex platforms (from edge and fog to clouds and large HPC systems). The group led by Dr. Badia has been developing the StarSs programming model for more than 15 years, with high adoption among application developers. Currently, the group focuses its efforts on PyCOMPSs/COMPSs, an instance of the programming model for distributed computing, including the Cloud.

Dr. Badia has published nearly 200 papers in international conferences and journals on the topics of her research. Her group is very active in projects funded by the European Commission and in contracts with industry. Dr. Badia is the PI of the eFlows4HPC project.

Registrations are now closed.

 

JUPITER Ascending – First European Exascale Supercomputer Coming to Jülich
https://www.risc2-project.eu/2023/01/02/jupiter-ascending-first-european-exascale-supercomputer-coming-to-julich/ | Mon, 02 Jan 2023

It was finally decided in 2022: Forschungszentrum Jülich will be home to Europe’s first exascale computer. The supercomputer is set to be the first in Europe to surpass the threshold of one quintillion (a “1” followed by 18 zeros) calculations per second. The system will be acquired by the European supercomputing initiative EuroHPC JU. The exascale computer should help to solve important and urgent scientific questions regarding, for example, climate change, how to combat pandemics, and sustainable energy production, while also enabling the intensive use of artificial intelligence and the analysis of large data volumes. The overall costs for the system amount to 500 million euros. Of this total, 250 million euros is being provided by EuroHPC JU and a further 250 million euros in equal parts by the German Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW).

The computer, named JUPITER (short for “Joint Undertaking Pioneer for Innovative and Transformative Exascale Research”), will be installed in 2023/2024 on the campus of Forschungszentrum Jülich. It is intended that the system will be operated by the Jülich Supercomputing Centre (JSC), whose supercomputers JUWELS and JURECA currently rank among the most powerful in the world. JSC participated in the application procedure for a high-end supercomputer as a member of the Gauss Centre for Supercomputing (GCS), an association of the three German national supercomputing centres: JSC in Jülich, the High-Performance Computing Center Stuttgart (HLRS), and the Leibniz Supercomputing Centre (LRZ) in Garching. The competition was organized by the European supercomputing initiative EuroHPC JU, which was formed by the European Union together with European countries and private companies.

JUPITER is now set to become the first European supercomputer to make the leap into the exascale class. In terms of computing power, it will be more powerful than 5 million modern laptops or PCs. Just like Jülich’s current supercomputer JUWELS, JUPITER will be based on a dynamic, modular supercomputing architecture, which Forschungszentrum Jülich developed together with European and international partners in the EU’s DEEP research projects.

In a modular supercomputer, various computing modules are coupled together. This enables program parts of complex simulations to be distributed over several modules, ensuring that the various hardware properties can be optimally utilized in each case. Its modular construction also means that the system is well prepared for integrating future technologies such as quantum computing or neuromorphic modules, which emulate the neural structure of a biological brain.

Figure: Modular Supercomputing Architecture. Computing and storage modules of the exascale computer in its base configuration (blue), as well as optional modules (green) and modules for future technologies (purple) as possible extensions.

In its base configuration, JUPITER will have an enormously powerful booster module with highly efficient GPU-based computation accelerators. Massively parallel applications are accelerated by this booster in a similar way to a turbocharger, for example to calculate high-resolution climate models, develop new materials, simulate complex cell processes and energy systems, advance basic research, or train next-generation, computationally intensive machine-learning algorithms.

One major challenge is the energy required for such large computing power. The average power consumption is anticipated to be up to 15 megawatts. JUPITER has been designed as a “green” supercomputer and will be powered by green electricity. The envisaged warm-water cooling system should help JUPITER achieve the highest efficiency values. At the same time, the cooling technology opens up the possibility of intelligently using the waste heat that is produced. For example, just like its predecessor system JUWELS, JUPITER will be connected to the new low-temperature network on the Forschungszentrum Jülich campus. Further potential uses for the waste heat from JUPITER are currently being investigated by Forschungszentrum Jülich.

By Jülich Supercomputing Centre (JSC)

 

Image: JUWELS, Germany’s fastest supercomputer, at Forschungszentrum Jülich, funded in equal parts by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW) via the Gauss Centre for Supercomputing (GCS). (Copyright: Forschungszentrum Jülich / Sascha Kreklau)

HPC Summer School “Escola Supercomputador Santos Dumont 2023”
https://www.risc2-project.eu/events/hpc-summer-school-escola-supercomputador-santos-dumont-2023/ | Mon, 19 Dec 2022

LNCC is organizing the HPC summer school “Escola Supercomputador Santos Dumont 2023”. The School will take place from January 16 to 24, in remote format, and is open to the HPC research community.

The school aims to provide the SDumont user community, and the high-performance computing community in general, with mini-courses related to programming on high-performance computers, covering parallel programming models, profiling tools and libraries for the development of optimized parallel algorithms.

Know more. 

Costa Rica HPC School 2023
https://www.risc2-project.eu/events/costa-rica-hpc-school-2023/ | Wed, 09 Nov 2022
