energy - RISC2 Project https://www.risc2-project.eu

RISC2’s partners gather in Brussels to reflect on three years of collaboration between EU and Latin America https://www.risc2-project.eu/2023/07/26/risc2s-partners-gather-in-brussels-to-reflect-on-three-years-of-collaboration-between-eu-and-latin-america/ Wed, 26 Jul 2023 12:03:56 +0000

Over the past three years, the RISC2 project has established a network for the exchange of knowledge and experience that has enabled its European and Latin American partners to strengthen relations in HPC and take significant steps forward in this area. With the project quickly coming to an end, it was time to meet face-to-face in Brussels to reflect on the progress and achievements, the goals set, the difficulties faced, and, above all, what can be expected for the future.

The session began with a welcome and introduction by Mateo Valero (BSC), one of the main drivers of this cooperation and a leading name in the field of HPC. His intervention was complemented by Fabrizio Gagliardi (BSC). Afterward, Elsa Carvalho (INESC TEC) presented the communication work carried out by the RISC2 team, an important strand for ensuring that news and achievements reach all the partners and countries involved.

Carlos J. Barrios Hernandez then presented the work done within the HPC Observatory, a relevant source of information to which European and Latin American research organizations can turn with HPC and/or AI questions.

The session closed with an important and pertinent debate on how to strengthen cooperation in HPC between the European Union and Latin America, in which all participants contributed their views and committed to ensuring that the work developed within the framework of RISC2 continues.

What did our partners have to say about the meeting?

Rafael Mayo Garcia, CIEMAT:

“The policy event organized by RISC2 in Brussels was of utmost importance for the development of HPC and digital capabilities for a shared infrastructure between the EU and LAC. Moreover, it has made crucial contributions to international entities such as CYTED, the Ibero-American Programme for the Development of Science and Technology. On the CIEMAT side, it has been a further step towards building and participating in a shared HPC ecosystem.”

Esteban Meneses, CeNAT:

“In Costa Rica, CeNAT plays a critical role in fostering technological change. To achieve that goal, it is fundamental to synchronize our efforts with other key players, particularly government institutions. The policy event in Brussels was a great opportunity to get closer to our science and technology ministry and start a dialogue on the importance of HPC, data science, and artificial intelligence for bringing about the societal changes we aim for.”

Esteban Mocskos, UBA:

“The Policy Event recently held in Brussels and organized by the RISC2 project had several remarkable points. The gathering of experts in HPC research and management from Latin America and Europe served to plan the next steps in the joint endeavor to deepen collaboration in this field. Advances in management policies, application optimization, and user engagement were fundamental topics treated during the main sessions and also during the one-to-one talks in every corner of the meeting room.
I can say that this meeting will also open different paths in these collaboration efforts, whose results we will surely see over the following years, with a positive impact on both sides of this fruitful relationship: Latin America and Europe.”

Sergio Nesmachnow, Universidad de la República:

“The National Supercomputing Center (Uruguay) and Universidad de la República have led the development of HPC strategies and technologies and their application to relevant problems in Uruguay. Specific meetings such as the policy event organized by RISC2 in Brussels are key to presenting and disseminating the current developments and achievements to relevant political and technological leaders in our country, so that they gain knowledge about the usefulness of HPC technologies and infrastructure to foster the development of national scientific research in key areas such as sustainability, energy, and social development. It was very important to present the network of collaborators in Latin America and Europe and to show the involvement of institutional and government agencies.

Within the contacts and talks during the organization of the meeting, we introduced the project to national authorities, including the National Director of Science and Technology, Ministry of Education and Culture, and the President of the National Agency for Research and Innovation, as well as the Uruguayan Agency for International Cooperation and academic authorities from all institutions involved in the National Supercomputing Center initiative. We hope the established contacts can result in productive joint efforts to foster the development of HPC and related scientific areas in our country and the region.”

Carla Osthoff, LNCC:

“In Brazil, LNCC plays a critical role in providing high-performance computing resources for the research community, training human resources, and fostering new technologies. The policy event organized by RISC2 in Brussels was fundamental to synchronizing LNCC efforts with other government institutions and international entities. On the LNCC side, it has been a further step towards building and participating in a shared HPC ecosystem.

Specific meetings such as the policy event organized by RISC2 in Brussels were very important to present the network of collaborators in Latin America and Europe and to show the involvement of institutional and government agencies.

As a result of joint activities in research and development in the areas of information and communication technologies (ICT), artificial intelligence, applied mathematics, and computational modelling, with emphasis on scientific computing and data science, a Memorandum of Understanding (MoU) has been signed between LNCC and Inria (France). As a result of new joint activities, LNCC and INESC TEC (Portugal) are starting a collaboration through the INESC TEC International Visiting Researcher Programme 2023.”

Scientific Machine Learning and HPC https://www.risc2-project.eu/2023/06/28/scientific-machine-learning-and-hpc/ Wed, 28 Jun 2023 08:24:28 +0000

In recent years we have seen rapid growth in interest in artificial intelligence in general, and in machine learning (ML) techniques in particular, across different branches of science and engineering. The rapid growth of the Scientific Machine Learning field derives from the combined development and use of efficient data analysis algorithms, the availability of data from scientific instruments and computer simulations, and advances in high-performance computing. On May 25, 2023, COPPE/UFRJ organized a forum to discuss developments in artificial intelligence and their impact on society [*].

Alvaro Coutinho, coordinator of the High Performance Computing Center (Nacad) at COPPE/UFRJ, presented advances in AI in engineering and the importance of multidisciplinary research networks to address current issues in Scientific Machine Learning. Alvaro took the opportunity to highlight the need for Brazil to invest in high-performance computing capacity.

The country’s sovereignty requires autonomy in producing ML advances, which in turn depends on HPC support at universities and research centers. Brazil has nine machines in the TOP500 list of the most powerful computer systems in the world, but almost all of them belong to the Petrobras company, and universities need much more. ML is well known to require HPC, and when combined with scientific computer simulations it becomes essential.

The conventional notion of ML involves training an algorithm to automatically discover patterns, signals, or structures that may be hidden in huge databases and whose exact nature is unknown and therefore cannot be explicitly programmed. This field faces two major drawbacks: the need for a significant volume of (labelled) data that is expensive to acquire, and a limited ability to extrapolate (making predictions beyond the scenarios contained in the training data is difficult).

Considering that an algorithm’s predictive ability is a learned skill, current challenges must be addressed to improve the analytical and predictive capacity of Scientific ML algorithms, for example, to maximize their impact in renewable energy applications. References [1-5] illustrate recent advances in Scientific Machine Learning in different areas of engineering and computer science.
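To give a flavour of what such physics-informed approaches look like in practice (see references [3] and [5]), here is a minimal, self-contained sketch, not code from the article or from any project deliverable, assuming PyTorch is available. A small network is fitted to a handful of labelled points while the residual of a known governing equation (here the toy ODE du/dx = -u) is enforced at cheap collocation points, which is precisely how Scientific ML mitigates the two drawbacks described above.

```python
# Minimal sketch (illustrative only, not project code): a physics-informed loss
# that combines a few expensive labelled points with a cheap equation residual.
# The "physics" here is the toy ODE du/dx = -u, whose solution is u = exp(-x).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_data = torch.tensor([[0.0], [0.5], [1.0]])      # scarce labelled data
u_data = torch.exp(-x_data)
x_col = torch.linspace(0.0, 2.0, 64).reshape(-1, 1).requires_grad_(True)

for step in range(2000):
    opt.zero_grad()
    loss_data = torch.mean((net(x_data) - u_data) ** 2)          # data misfit
    u = net(x_col)
    du = torch.autograd.grad(u, x_col, torch.ones_like(u),
                             create_graph=True)[0]
    loss_pde = torch.mean((du + u) ** 2)                          # ODE residual
    (loss_data + loss_pde).backward()
    opt.step()
```

The physics term lets the model generalize beyond the three labelled points, which is the property that makes these methods attractive for data-poor engineering problems.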

References:

[*] https://www.coppe.ufrj.br/pt-br/planeta-coppe-noticias/noticias/coppe-e-sociedade-especialistas-debatem-os-reflexos-da-inteligencia

[1] Baker, Nathan, Frank Alexander, Timo Bremer, Aric Hagberg, Yannis Kevrekidis, Habib Najm, Manish Parashar, Abani Patra, James Sethian, Stefan Wild, Karen Willcox, and Steven Lee. Workshop Report on Basic Research Needs for Scientific Machine Learning: Core Technologies for Artificial Intelligence. United States: N. p., 2019. Web. doi:10.2172/1478744.

[2] Brunton, Steven L., Bernd R. Noack, and Petros Koumoutsakos. “Machine learning for fluid mechanics.” Annual Review of Fluid Mechanics 52 (2020): 477-508.

[3] Karniadakis, George Em, et al. “Physics-informed machine learning.” Nature Reviews Physics 3.6 (2021): 422-440.

[4] Inria White Book on Artificial Intelligence: Current challenges and Inria’s engagement, 2nd edition, 2021. URL: https://www.inria.fr/en/white-paper-inria-artificial-intelligence

[5] Silva, Romulo, Umair bin Waheed, Alvaro Coutinho, and George Em Karniadakis. “Improving PINN-based Seismic Tomography by Respecting Physical Causality.” In AGU Fall Meeting Abstracts, vol. 2022, pp. S11C-09. 2022.

Future of EU-LATAM collaboration on HPC https://www.risc2-project.eu/events/future-of-eu-latam-collaboration-on-hpc/ Wed, 14 Jun 2023 15:21:48 +0000

The project RISC2 aims to strengthen collaboration between Latin America and the European Union in High-Performance Computing (HPC). One of the project’s main goals is to raise awareness among policymakers about the potential of international collaboration in HPC to tackle global challenges, such as climate change, health threats, and energy transition, and promote the exchange of best practices between research communities in both regions. Throughout the project lifecycle, the RISC2 Consortium has participated in policy dialogues, supported international training events, established an HPC Observatory and produced up-to-date reports that describe the HPC ecosystem in Latin America. The RISC2 partners are organizing an event in Brussels on 18 July 2023 to present these results to policymakers and foster a conversation about the roadmap for future bi-regional collaboration in the field of HPC.

Read more here.

Check the agenda:

Subsequent Progress And Challenges Concerning The México-UE Project ENERXICO: Supercomputing And Energy For México https://www.risc2-project.eu/2023/05/24/subsequent-progress-and-challenges-concerning-the-mexico-ue-project-enerxico-supercomputing-and-energy-for-mexico/ Wed, 24 May 2023 09:38:01 +0000

In this short note, we briefly describe some subsequent advances and challenges related to two work packages developed in the ENERXICO Project. This work opened the possibility of collaborating with colleagues from institutions that did not participate in the project, for example from the University of Santander in Colombia and from the University of Vigo in Spain. It exemplifies the importance of the RISC2 project, in the sense that strengthening collaboration and finding joint research areas and applied HPC ventures is of great benefit to both our Latin American countries and the EU. We are now initiating talks to target several energy-related topics with some of the RISC2 partners.

The ENERXICO Project focused on developing advanced simulation software solutions for the oil & gas, wind energy and transportation powertrain industries. The institutions that collaborated in the project were, for México: ININ (institution responsible for México), Centro de Investigación y de Estudios Avanzados del IPN (Cinvestav), Universidad Nacional Autónoma de México (UNAM IINGEN, FCUNAM), Universidad Autónoma Metropolitana-Azcapotzalco, Instituto Mexicano del Petróleo, Instituto Politécnico Nacional (IPN) and Pemex; and for the European Union: Barcelona Supercomputing Center (institution responsible for the EU), Technische Universität München (TUM, Germany), Université Grenoble Alpes (UGA, France), CIEMAT (Spain), Repsol, Iberdrola, Bull (France) and Universidad Politécnica de Valencia (Spain).

The project comprised four work packages (WPs):

WP1 Exascale Enabling: This was a cross-cutting work package that focused on assessing performance bottlenecks and improving the efficiency of the HPC codes proposed in the vertical WPs (EU Coordinator: BULL, MEX Coordinator: CINVESTAV-COMPUTACIÓN);

WP2 Renewable energies: This WP deployed new applications required to design, optimize and forecast the production of wind farms (EU Coordinator: IBR, MEX Coordinator: ININ);

WP3 Oil and gas energies: This WP addressed the impact of HPC on the entire oil industry chain (EU Coordinator: REPSOL, MEX Coordinator: ININ);

WP4 Biofuels for transport: This WP carried out advanced numerical simulations of biofuels under conditions similar to those of an engine (EU Coordinator: UPV-CMT, MEX Coordinator: UNAM).

For WP1, the following codes were optimized for exascale computers: Alya, BSIT, DualSPHysics, ExaHyPE, SeisSol, SEM46 and WRF.

As an example, we present some of the results for the DualSPHysics code. We evaluated two architectures. The first set of hardware consisted of identical nodes, each equipped with 2 Intel Xeon Gold 6248 processors clocking at 2.5 GHz and about 192 GB of system memory; each node contained 4 Nvidia Tesla V100 GPUs with 32 GB of main memory each. The second set of hardware consisted of identical nodes, each equipped with 2 AMD Milan 7763 processors clocking at 2.45 GHz and about 512 GB of system memory; each node contained 4 Nvidia A100 (Ampere) GPUs with 40 GB of main memory each. The code was compiled and linked with CUDA 10.2 and OpenMPI 4. The application was executed using one GPU per MPI rank.

In Figures 1 and 2 we show the scalability of the code for the strong and weak scaling tests, which indicate that the scaling is very good. Motivated by these excellent results, we are in the process of performing new SPH simulations on the LUMI supercomputer with up to 26,834 million particles, to be run with up to 500 GPUs, i.e. 53.7 million particles per GPU. These simulations will be done initially for a Wave Energy Converter (WEC) farm (see Figure 3), and later for turbulent models.
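For readers less familiar with these metrics, the sketch below (with hypothetical timings, not the ENERXICO measurements) shows how the strong- and weak-scaling efficiencies plotted in Figures 1 and 2 are typically computed:

```python
# Illustrative only: hypothetical runtimes, not the DualSPHysics results.

def strong_scaling_efficiency(t1, tn, n):
    """Fixed total problem size: speedup = t1/tn, ideal speedup = n."""
    return (t1 / tn) / n

def weak_scaling_efficiency(t1, tn):
    """Problem size grows with n: ideal is constant runtime."""
    return t1 / tn

# Example: runtimes in seconds on 1 GPU and on 8 GPUs.
print(strong_scaling_efficiency(t1=800.0, tn=110.0, n=8))   # ~0.91 (91 %)
print(weak_scaling_efficiency(t1=800.0, tn=860.0))          # ~0.93 (93 %)
```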

Figure 1. Strong scaling test with a fixed number of particles but an increasing number of GPUs.

 

Figure 2. Weak scaling test with an increasing number of particles and GPUs.

 

Figure 3. Wave Energy Converter (WEC) Farm (taken from https://corpowerocean.com/)

 

As part of WP3, ENERXICO developed a first version of a computer code called Black Hole (or BH code) for the numerical simulation of oil reservoirs, based on the numerical technique known as Smoothed Particle Hydrodynamics (SPH). This new code is an extension of the DualSPHysics code (https://dual.sphysics.org/); it is the first SPH-based code developed for the numerical simulation of oil reservoirs and has important benefits over commercial codes based on other numerical techniques.

The BH code is a large-scale, massively parallel reservoir simulator capable of performing simulations with billions of “particles” or fluid elements that represent the system under study. It contains improved multi-physics modules that automatically combine the effects of interrelated physical and chemical phenomena to accurately simulate in-situ recovery processes. This has led to the development of a graphical user interface, a multi-platform application for code execution and visualization, for carrying out simulations with data provided by industrial partners, and for performing comparisons with available commercial packages.
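To give a flavour of the SPH technique on which DualSPHysics and the BH code are based, the toy sketch below (illustrative only, not project code, with made-up particle data) computes the density at each particle as a kernel-weighted sum over its neighbours using the standard cubic-spline kernel in 3D; production codes evaluate the same sums with neighbour lists on GPUs for billions of particles.

```python
# Toy SPH density estimate: rho_i = sum_j m_j * W(|r_i - r_j|, h).
import numpy as np

def cubic_spline_w(r, h):
    """Standard cubic-spline (M4) kernel in 3D."""
    q = r / h
    sigma = 1.0 / (np.pi * h ** 3)
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

def sph_density(positions, masses, h):
    """Brute-force O(N^2) sum; real codes use neighbour lists and GPUs."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_w(r, h)).sum(axis=1)

rng = np.random.default_rng(0)
pos = rng.random((500, 3))                        # 500 particles in a unit box
rho = sph_density(pos, np.full(500, 1.0 / 500), h=0.1)
print(rho.mean())   # close to the box density of 1 (slightly below, edge effects)
```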

Furthermore, a considerable effort is presently being made to simplify the process of setting up the input for reservoir simulations from exploration data, by means of a workflow fully integrated in our industrial partners’ software environment. A crucial part of the numerical simulations is the equation of state. We have developed an equation of state based on crude oil data (the so-called PVT data) in two forms: the first as a subroutine that is integrated into the code, and the second as an interpolation subroutine over property tables that are generated from the equation-of-state subroutine.
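As an illustration of the second form, the sketch below interpolates fluid properties from a pre-generated table; the table values are made up and are not the ENERXICO PVT data.

```python
# Illustrative PVT table lookup (hypothetical values, not project data).
import numpy as np

pressure = np.array([5.0, 10.0, 15.0, 20.0, 25.0])    # MPa
bo_table = np.array([1.10, 1.14, 1.18, 1.21, 1.19])   # oil formation volume factor
mu_table = np.array([3.2, 2.9, 2.6, 2.4, 2.5])        # viscosity, cP

def lookup_pvt(p):
    """Linear interpolation inside the table, clamped at the end points."""
    return np.interp(p, pressure, bo_table), np.interp(p, pressure, mu_table)

bo, mu = lookup_pvt(17.5)
print(f"Bo = {bo:.3f}, viscosity = {mu:.2f} cP")
```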

An oil reservoir is composed of a porous medium with a multiphase fluid made of oil, gas, rock and other solids. The aim of the code is to simulate fluid flow in a porous medium, as well as the behaviour of the system at different pressures and temperatures. The tool should allow the reduction of uncertainties in the predictions that are carried out. For example, it may answer questions about the benefits of injecting a solvent (which could be CO2, nitrogen, combustion gases, methane, etc.) into a reservoir, and about the breakthrough times of the gases in the production wells. With these estimates, the necessary measures can be taken to mitigate their presence and to calculate the cost, the injection pressure, the injection volumes and, most importantly, where and for how long to inject. The same applies to more complex processes, such as those where fluids, air or steam are injected and interact with the rock, oil, water and gas present in the reservoir. The simulator should be capable of supporting monitoring and measurement plans.

In order to perform a simulation of an oil reservoir field, an initial model needs to be created. Using geophysical forward and inverse numerical techniques, the ENERXICO project evaluated novel, high-performance simulation packages for challenging seismic exploration cases characterized by extreme geometric complexity. We are now exploring high-order methods based on fully unstructured tetrahedral meshes, as well as tree-structured Cartesian meshes with adaptive mesh refinement (AMR), for better spatial resolution. Using this methodology, our packages (and some commercial packages), together with seismic and geophysical data of naturally fractured reservoir oil fields, are able to create the geometry (see Figure 4) and reproduce basic properties of the oil reservoir field we want to study. A number of numerical simulations are then performed, and from these, oil field exploitation scenarios are generated.

 

Figure 4. A detail of the initial model for an SPH simulation of a porous medium.

 

More information about the ENERXICO Project can be found at: https://enerxico-project.eu/

By: Jaime Klapp (ININ, México) and Isidoro Gitler (Cinvestav, México)
Webinar: Improving energy-efficiency of High-Performance Computing clusters https://www.risc2-project.eu/events/webinar-7-improving-energy-efficiency-of-high-performance-computing-clusters/ Thu, 26 Jan 2023 13:37:07 +0000


Date: April 26, 2023 | 3 p.m. (UTC+1)

Speakers: Lubomir Riha and Ondřej Vysocký, IT4Innovations National Supercomputing Center

Moderator: Esteban Mocskos, Universidad de Buenos Aires

High-Performance Computing centers consume megawatts of electrical power, which is a limiting factor in building bigger systems on the path to exascale and post-exascale clusters. Such high power consumption leads to several challenges, including the need for a robust power supply and distribution network, enormous energy bills, and significant CO2 emissions. To increase power efficiency, vendors accommodate various heterogeneous hardware that must be fully utilized by users’ applications in order to be used efficiently. Such requirements may be hard to fulfill, which opens up the possibility of limiting the available resources for additional power and energy savings with little or no performance penalty.

The talk will present best practices on how to grant rights to control hardware parameters, how to measure the energy consumption of the hardware, and what can be expected from performing energy-saving activities based on hardware tuning.
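To give a concrete idea of the measurement side, here is a minimal sketch (not the MERIC library and not material from the webinar) that reads the package energy counter exposed by the Linux powercap/RAPL interface. It assumes an Intel node with the intel_rapl driver loaded and read permission on the counter, which is often restricted to root.

```python
# Minimal sketch: package energy over an interval via Linux powercap/RAPL.
# Assumes /sys/class/powercap/intel-rapl:0/energy_uj exists and is readable.
import time

RAPL_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"   # CPU package 0

def read_energy_uj(path=RAPL_FILE):
    with open(path) as f:
        return int(f.read().strip())

start = read_energy_uj()
time.sleep(5)                               # run the code region of interest instead
joules = (read_energy_uj() - start) / 1e6   # counter is in microjoules; may wrap
print(f"Energy: {joules:.2f} J, average power: {joules / 5:.1f} W")
```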

About the speakers:

Lubomir Riha, Ph.D. is the Head of the Infrastructure Research Lab at IT4Innovations National Supercomputing Center. Previously, he was a research scientist in the High-Performance Computing Lab at George Washington University, ECE Department. He received his Ph.D. degree in Electrical Engineering from the Czech Technical University in Prague, Czech Republic, and a Ph.D. degree in Computer Science from Bowie State University, USA. Currently, he is a local principal investigator of two EuroHPC Centers of Excellence, MAX and SPACE, and of two EuroHPC projects, SCALABLE and EUPEX (which designs a prototype of the European exascale machine). Previously, he was a local PI of the H2020 Center of Excellence POP2 and of the H2020-FET HPC READEX project. His research interests are the optimization of HPC applications, energy-efficient computing, the acceleration of scientific and engineering applications using GPUs and many-core accelerators, and parallel and distributed rendering.

Ondrej Vysocky is a Ph.D. candidate at VSB – Technical University of Ostrava, Czech Republic, and also works at IT4Innovations in the Infrastructure Research Lab. His research is focused on energy efficiency in high-performance computing. He was an investigator of the Horizon 2020 READEX project, which dealt with the energy efficiency of parallel applications using dynamic tuning. Since then, he has been developing the MERIC library, a runtime system for energy measurement and hardware parameter tuning during a parallel application run. Using the library, he is an investigator in several H2020 projects, including Performance Optimisation and Productivity (POP2) and the European Pilot for Exascale (EUPEX). He is also a member of the PowerStack initiative, which works on a holistic, extensible, and scalable approach to power management.

RISC2 webinar series aims to benefit HPC research and industry in Europe and Latin America https://www.risc2-project.eu/2023/01/26/risc2-webinar-season-is-back-for-season-2/ Thu, 26 Jan 2023 13:32:50 +0000

After the success of the first four webinars, the RISC2 Webinar Series “HPC System & Tools” is back for its second season. The webinars will run from February 22 until May 2023.

Each webinar will present the state of the art in methods and tools for setting up and maintaining HPC hardware and software infrastructures. Each talk will last around 30-40 minutes, followed by a 10-15-minute moderated discussion with the audience.

There are already 4 webinars scheduled:
JUPITER Ascending – First European Exascale Supercomputer Coming to Jülich https://www.risc2-project.eu/2023/01/02/jupiter-ascending-first-european-exascale-supercomputer-coming-to-julich/ Mon, 02 Jan 2023 12:14:22 +0000

It was finally decided in 2022: Forschungszentrum Jülich will be home to Europe’s first exascale computer. The supercomputer is set to be the first in Europe to surpass the threshold of one quintillion (a “1” followed by 18 zeros) calculations per second. The system will be acquired by the European supercomputing initiative EuroHPC JU. The exascale computer should help to solve important and urgent scientific questions regarding, for example, climate change, how to combat pandemics, and sustainable energy production, while also enabling the intensive use of artificial intelligence and the analysis of large data volumes. The overall costs for the system amount to 500 million euros. Of this total, 250 million euros is being provided by EuroHPC JU and a further 250 million euros in equal parts by the German Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW).

The computer, named JUPITER (short for “Joint Undertaking Pioneer for Innovative and Transformative Exascale Research”), will be installed in 2023/2024 on the campus of Forschungszentrum Jülich. It is intended that the system will be operated by the Jülich Supercomputing Centre (JSC), whose supercomputers JUWELS and JURECA currently rank among the most powerful in the world. JSC participated in the application procedure for a high-end supercomputer as a member of the Gauss Centre for Supercomputing (GCS), an association of the three German national supercomputing centres: JSC in Jülich, the High-Performance Computing Center Stuttgart (HLRS), and the Leibniz Supercomputing Centre (LRZ) in Garching. The competition was organized by the European supercomputing initiative EuroHPC JU, which was formed by the European Union together with European countries and private companies.

JUPITER is now set to become the first European supercomputer to make the leap into the exascale class. In terms of computing power, it will be more powerful than 5 million modern laptops or PCs. Just like Jülich’s current supercomputer JUWELS, JUPITER will be based on a dynamic, modular supercomputing architecture, which Forschungszentrum Jülich developed together with European and international partners in the EU’s DEEP research projects.

In a modular supercomputer, various computing modules are coupled together. This enables program parts of complex simulations to be distributed over several modules, ensuring that the various hardware properties can be optimally utilized in each case. Its modular construction also means that the system is well prepared for integrating future technologies such as quantum computing or neuromorphic modules, which emulate the neural structure of a biological brain.

Figure: Modular Supercomputing Architecture. Computing and storage modules of the exascale computer in its basic configuration (blue), as well as optional modules (green) and modules for future technologies (purple) as possible extensions.

In its basic configuration, JUPITER will have an enormously powerful booster module with highly efficient GPU-based computation accelerators. Massively parallel applications are accelerated by this booster in a similar way to a turbocharger, for example to calculate high-resolution climate models, develop new materials, simulate complex cell processes and energy systems, advance basic research, or train next-generation, computationally intensive machine-learning algorithms.

One major challenge is the energy required for such large computing power. The average power is anticipated to be up to 15 megawatts. JUPITER has been designed as a “green” supercomputer and will be powered by green electricity. The envisaged warm-water cooling system should help to ensure that JUPITER achieves the highest efficiency values. At the same time, the cooling technology opens up the possibility of intelligently using the waste heat that is produced. For example, just like its predecessor system JUWELS, JUPITER will be connected to the new low-temperature network on the Forschungszentrum Jülich campus. Further potential applications for the waste heat from JUPITER are currently being investigated by Forschungszentrum Jülich.
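As a back-of-envelope illustration of that scale (assumed values, not official figures), an average draw of 15 megawatts over a full year corresponds to roughly 130 gigawatt-hours:

```python
# Rough arithmetic only: annual energy implied by a 15 MW average draw.
avg_power_mw = 15.0
hours_per_year = 365 * 24
energy_gwh = avg_power_mw * hours_per_year / 1000.0   # MWh -> GWh
print(f"{energy_gwh:.0f} GWh per year")                # ~131 GWh
```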

By Jülich Supercomputing Centre (JSC)

 

Image: Germany’s fastest supercomputer JUWELS at Forschungszentrum Jülich, which is funded in equal parts by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW) via the Gauss Centre for Supercomputing (GCS). (Copyright: Forschungszentrum Jülich / Sascha Kreklau)

Supercomputing as a great opportunity for the clean energy transition https://www.risc2-project.eu/2022/07/25/supercomputing-as-a-great-opportunity-for-the-clean-energy-transition/ Mon, 25 Jul 2022 08:15:30 +0000

Given the current trend of the EU political agenda in the energy sector linking their strategy to accelerate decarbonization with the adoption of digital technologies, it is easy to deduce that supercomputing is a great opportunity for the clean energy transition in Europe (even beyond the current crisis caused by the invasion of Ukraine by Russia). However, while Europe is working towards a decarbonized energy ecosystem, with a clear vision and targets set by the European Green Deal, it is also recognized that energy domain scientists are not realizing the full potential that HPC-powered simulations can offer. This situation is a result of the lack of HPC-related experience available to scientists. To this end, different organizations and projects such as RISC2 are working so that a wide range of scientists in the energy domain can access the considerable experience accumulated through previous collaborations with supercomputing specialists.

In accordance with all of the above, it seems appropriate to consider the launch of coordination actions between both domains at both the European and Latin American levels to support the development of adjusted data models and simulation codes for energy thematic areas, while making use of the latest technology in HPC.

With different countries and regions in the world trying to win the race for the development of the most efficient Computing and Artificial Intelligence technology, it seems equally logical to support the development of high-performance computing research infrastructure. Scientific communities can now access the most powerful computing resources and use them to run simulations focused on energy challenges. Simulation enables planning and working towards tomorrow’s clean energy sources in a digital framework, greatly reducing prototyping costs and waste.

As examples, there are several fields in which the advances brought by the exploitation of digital methodologies (HPC jointly with data science and artificial intelligence) will enable the resolution of more ambitious problems in the energy sector:

  • Improvement in the exploitation of energy sources
    • Weather forecasting for off- and on-shore wind energy and turbines
    • Design of new devices such as wind turbines, solar thermal plants, collectors, solid-state batteries, etc.
    • Computational Fluid Dynamics (CFD) analysis of heat transfer between solar radiation, materials, and fluids
  • Design of advanced materials of energy interest
    • Materials for innovative batteries via accurate molecular dynamics and/or ab initio simulations to design and characterize at the atomic-scale new cathodes and electrolytes
    • Materials for photovoltaic devices via multiscale simulations where atomic-scale ab-initio simulations are combined with mesoscale approaches to design efficient energy harvesting devices
  • Energy distribution
    • Integrated energy system analysis, optimization of the energy mix and smart grids fed with renewable energies, and further distribution in the electricity grid
    • Economic energy models
    • Smart meters and sensor deployment, and their further application to energy efficiency in buildings, smart cities, etc.; exploitation of additional infrastructures such as fog/edge computing
    • New actors (prosumers) in a distributed electricity market, including energy systems integration

Behind all the previous items there is a solid track record of research that forms the foundation of this effort to reinforce research on digital topics. Some examples are:

  • Design of Digitalized Intelligent Energy Systems, for example, their application to cities in which zero-emissions buildings or intelligent power systems are pursued
  • Deeper understanding of the physics behind the energy sources, for example, multiscale simulation of the atmospheric flow for wind farm operation through CFD-RANS or LES simulations coupled to mesoscale models, taking advantage of the capabilities offered by exascale computing
  • New designs of Fluids Structure Interactions (FSI), for example, for full rotor simulations coupled to Computational Fluid Dynamics (CFD) simulations. Structural dynamics (fatigue) in different devices
  • Optimization of codes by means of new mathematical kernels, not simply computational porting
  • Integration of different computing platforms seamlessly combining HPC, HTC, and High-Performance Data Analytics methodologies

Moreover, the advanced modeling of energy systems is made possible by the tight synergy with other disciplines, from mathematics to computer science and from data science to numerical analysis. Among other things, high-end modeling requires:

  • Data science, as handling a large volume of data is key to energy-focused HPC and HTC simulations and data-driven workflows
  • Design of customized machine and deep learning techniques for improved artificial intelligence approaches
  • Efficient implementation of digital platforms, their interconnections and interoperability

 

By CIEMAT

EU-LATAM research collaboration to encourage regional research ecosystems https://www.risc2-project.eu/2022/07/05/eu-latam-research-collaboration-to-encourage-regional-research-ecosystems/ Tue, 05 Jul 2022 14:37:34 +0000

The European Union and Latin American countries have a long history of research collaboration. Programmes like Horizon Europe and Horizon 2020 and funding schemes such as European Research Council (ERC) grants and Marie Sklodowska-Curie actions have fostered research projects by individual scientists and consortia formed by partners from both regions.

In addition, Latin American countries have signed bilateral science and technology (S&T) agreements with the European Commission (EC). Within this framework, both parties participate in setting common principles, goals and conditions necessary to ensure a level playing field for researchers from both sides of the Atlantic. Brazil and Mexico recently signed new bilateral S&T agreements with the EC in November 2021 and March 2022, respectively. Co-funding schemes signed by the EC and their counterparts (Conacyt in Mexico; CNPq, FINEP and CONFAP in Brazil) support national partners participating in successful Horizon Europe projects.

These initiatives aim to encourage participation from Latin American entities in Horizon Europe calls, strengthening bilateral relations between the EU and Latin American countries, especially in research topics with a supranational scope (climate change, disease prevention, and renewable energies, for example).

A successful example of bilateral co-funding was the project ENERXICO (2019-2021), which received grants from Horizon 2020 and the Mexican Department of Energy (CONACYT-SENER Hidrocarburos).

ENERXICO applied HPC techniques to energy industry simulations of critical interest to Mexico: oil & gas industry in upstream, midstream and downstream problems, wind energy industry and combustion efficiency for transportation. The main objectives of the project were:

  • Develop beyond state-of-the-art high-performance simulation tools for the energy industry
  • Increase the oil & gas reserves using geophysical exploration for subsalt reservoirs
  • Improve refining and transport efficiency of heavy oil
  • Develop a robust wind energy sector to mitigate oil dependency
  • Improve fuel generation using biofuels

The consortium, coordinated by the Barcelona Supercomputing Center and the Mexican National Institute for Nuclear Research (ININ), included stakeholders from academia and the energy industry from the EU and Mexico.

Strengthening international cooperation goes beyond improving bilateral relations; S&T collaboration between regions can encourage increased multilateral research ecosystems within these regions, helping bridge the gaps in the Latin American scientific field due to the absence of a common regional funding source.

 

By Barcelona Supercomputing Center

CLUSTER UY https://www.risc2-project.eu/2022/04/22/cluster-uy/ Fri, 22 Apr 2022 10:13:10 +0000

  • Title: CLUSTER UY
  • System name: CLUSTER UY
  • Location: National Supercomputing Center –Datacenter Ing. José Luis Massera – Antel
  • Web
  • OS: Linux CentOS 7
  • Country: Uruguay
  • Processor architecture:
    • 1216 CPU computing cores (1120 Intel Xeon-Gold 6138 2.00GHz cores and 96 AMD EPYC 7642 2.30GHz cores).
    • 3.8 TB of RAM
    • 28 Nvidia Tesla P100 GPU cards with 12 GB of memory each (a total of 100,352 GPU cores).
  • Vendor: N/D
  • Peak performance:
    • 327 Tflops
  • Access Policy
  • Main research domains: Astronomy, Bioinformatics, Biology, Computer graphics, Computer Sciences, Data analysis, Energy, Engineering, Geoinformatics, Mathematics, Optimization, Physics, Social Sciences, Statistics