Subsequent Progress And Challenges Concerning The México-UE Project ENERXICO: Supercomputing And Energy For México
https://www.risc2-project.eu/2023/05/24/subsequent-progress-and-challenges-concerning-the-mexico-ue-project-enerxico-supercomputing-and-energy-for-mexico/ | Wed, 24 May 2023

In this short note, we briefly describe some subsequent advances and challenges related to two work packages developed in the ENERXICO Project. The project opened the possibility of collaborating with colleagues from institutions that did not participate in it, for example from the University of Santander in Colombia and from the University of Vigo in Spain. This illustrates the importance of the RISC2 project: strengthening collaboration and finding joint research areas and applied HPC ventures is of great benefit both to our Latin American countries and to the EU. We are now initiating talks with some of the RISC2 partners to target several energy-related topics.

The ENERXICO Project focused on developing advanced simulation software solutions for the oil & gas, wind energy, and transportation powertrain industries. The participating institutions were, for México: ININ (institution responsible for México), Centro de Investigación y de Estudios Avanzados del IPN (Cinvestav), Universidad Nacional Autónoma de México (UNAM IINGEN, FCUNAM), Universidad Autónoma Metropolitana-Azcapotzalco, Instituto Mexicano del Petróleo, Instituto Politécnico Nacional (IPN), and Pemex; and for the European Union: Barcelona Supercomputing Center (institution responsible for the EU), Technische Universität München, Germany (TUM), Université Grenoble Alpes, France (UGA), CIEMAT, Spain, Repsol, Iberdrola, Bull, France, and Universidad Politécnica de Valencia, Spain.

The project comprised four work packages (WPs):

WP1 Exascale Enabling: a cross-cutting work package that assessed performance bottlenecks and improved the efficiency of the HPC codes proposed in the vertical WPs (EU coordinator: BULL, MEX coordinator: CINVESTAV-COMPUTACIÓN);

WP2 Renewable energies: this WP deployed new applications required to design, optimize, and forecast the production of wind farms (EU coordinator: IBR, MEX coordinator: ININ);

WP3 Oil and gas energies: this WP addressed the impact of HPC on the entire oil industry chain (EU coordinator: REPSOL, MEX coordinator: ININ);

WP4 Biofuels for transport: this WP carried out advanced numerical simulations of biofuels under engine-like conditions (EU coordinator: UPV-CMT, MEX coordinator: UNAM).

For WP1, the following codes were optimized for exascale computers: Alya, BSIT, DualSPHysics, ExaHyPE, SeisSol, SEM46, and WRF.

As an example, we present some of the results for the DualSPHysics code. We evaluated two architectures. The first set of hardware consisted of identical nodes, each equipped with two Intel Xeon Gold 6248 processors clocked at 2.5 GHz and about 192 GB of system memory; each node contained four NVIDIA V100 Tesla GPUs with 32 GB of memory each. The second set consisted of identical nodes, each equipped with two AMD Milan 7763 processors clocked at 2.45 GHz and about 512 GB of system memory; each node contained four NVIDIA A100 Ampere GPUs with 40 GB of memory each. The code was compiled and linked with CUDA 10.2 and OpenMPI 4, and the application was executed using one GPU per MPI rank.
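For readers unfamiliar with this execution model, the sketch below shows one common way to implement the one-GPU-per-MPI-rank binding described above, using mpi4py and CuPy as stand-ins for the MPI and CUDA layers. It is an illustrative pattern under these assumptions, not code from DualSPHysics itself.

```python
# Minimal sketch (not from the ENERXICO code base) of the usual
# "one GPU per MPI rank" binding scheme on multi-GPU nodes.
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Ranks that share a node get consecutive local ranks 0..3,
# matching the four GPUs installed in each node described above.
local_comm = comm.Split_type(MPI.COMM_TYPE_SHARED)
local_rank = local_comm.Get_rank()

n_gpus = cp.cuda.runtime.getDeviceCount()
cp.cuda.Device(local_rank % n_gpus).use()  # pin this rank to one GPU

print(f"global rank {rank} -> local rank {local_rank} "
      f"-> GPU {local_rank % n_gpus}")
```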

In Figures 1 and 2 we show the strong and weak scaling of the code; the tests indicate that the scaling is very good. Motivated by these excellent results, we are in the process of performing new SPH simulations on the LUMI supercomputer with up to 26,834 million particles, to be run on up to 500 GPUs, i.e., 53.7 million particles per GPU. These simulations will initially target a Wave Energy Converter (WEC) farm (see Figure 3), and later turbulence models.
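For reference, the quantities plotted in such tests are the standard scaling metrics (general definitions, not specific to ENERXICO), where T(N) is the time to solution on N GPUs:

```latex
% Strong scaling: fixed total problem size, increasing N
S(N) = \frac{T(1)}{T(N)}, \qquad
E_{\mathrm{strong}}(N) = \frac{T(1)}{N\,T(N)}
% Weak scaling: problem size grown proportionally to N
E_{\mathrm{weak}}(N) = \frac{T(1)}{T(N)}
```

Under these definitions, the 500-GPU runs above keep the per-GPU load fixed at roughly 26,834/500 ≈ 53.7 million particles, which is a weak-scaling regime.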

Figure 1. Strong scaling test with a fixed number of particles and an increasing number of GPUs.

Figure 2. Weak scaling test with increasing numbers of particles and GPUs.

Figure 3. Wave Energy Converter (WEC) farm (image from https://corpowerocean.com/).

 

As part of WP3, ENERXICO developed the first version of a computer code called Black Hole (or BH code) for the numerical simulation of oil reservoirs, based on the numerical technique known as Smoothed Particle Hydrodynamics (SPH). This new code is an extension of the DualSPHysics code (https://dual.sphysics.org/) and is the first SPH-based code developed for the numerical simulation of oil reservoirs; it offers important benefits over commercial codes based on other numerical techniques.

The BH code is a large-scale, massively parallel reservoir simulator capable of performing simulations with billions of “particles”, the fluid elements that represent the system under study. It contains improved multi-physics modules that automatically combine the effects of interrelated physical and chemical phenomena to accurately simulate in-situ recovery processes. This work has also led to the development of a graphical user interface, a multi-platform application for code execution and visualization, used for carrying out simulations with data provided by industrial partners and for comparisons with available commercial packages.

Furthermore, a considerable effort is presently being made to simplify the process of setting up the input for reservoir simulations from exploration data, by means of a workflow fully integrated into our industrial partners’ software environment. A crucial part of the numerical simulations is the equation of state. We have developed an equation of state based on crude oil data (so-called PVT data) in two forms: first, as a subroutine integrated into the code; and second, as a subroutine that interpolates property tables generated by the equation-of-state subroutine.
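The table-interpolation form can be pictured as follows: properties are precomputed from the equation-of-state routine on a pressure grid and then looked up by interpolation during the simulation. The sketch below is illustrative only; the table layout, property names, and placeholder values are assumptions, not the actual BH code API.

```python
# Minimal sketch of PVT table interpolation (illustrative, not BH code).
import numpy as np

# Hypothetical precomputed PVT table: pressure [Pa] vs. oil properties,
# as would be generated once by the equation-of-state subroutine.
pressure_grid = np.linspace(1e5, 5e7, 200)           # tabulation pressures
oil_density   = 800.0 + 1.5e-6 * pressure_grid       # placeholder values
oil_viscosity = 2e-3 * np.exp(-1e-8 * pressure_grid)

def pvt_lookup(p):
    """Interpolate tabulated oil properties at pressure p."""
    rho = np.interp(p, pressure_grid, oil_density)
    mu  = np.interp(p, pressure_grid, oil_viscosity)
    return rho, mu

rho, mu = pvt_lookup(2.3e7)  # density [kg/m^3], viscosity [Pa.s]
```

The appeal of the table form is that a cheap interpolation replaces a potentially expensive equation-of-state evaluation inside the inner particle loop.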

An oil reservoir is composed of a porous rock medium containing a multiphase fluid of oil, gas, water, and other solids. The aim of the code is to simulate fluid flow in the porous medium, as well as the behaviour of the system at different pressures and temperatures. The tool should allow the reduction of uncertainties in the predictions that are carried out. For example, it may answer questions about the benefits of injecting a solvent (CO2, nitrogen, combustion gases, methane, etc.) into a reservoir, and about the breakthrough times of the gases at the production wells. With these estimates, the necessary measures can be taken to mitigate their presence, and one can calculate the cost, the injection pressure, the injection volumes and, most importantly, where and for how long to inject. The same applies to more complex processes, such as those where fluids, air, or steam are injected and interact with the rock, oil, water, and gas present in the reservoir. The simulator should be capable of supporting monitoring and measurement plans.

In order to perform a simulation of a reservoir oil field, an initial model needs to be created. Using geophysical forward and inverse numerical techniques, the ENERXICO project evaluated novel, high-performance simulation packages for challenging seismic exploration cases characterized by extreme geometric complexity. We are now exploring high-order methods based on fully unstructured tetrahedral meshes, as well as tree-structured Cartesian meshes with adaptive mesh refinement (AMR), for better spatial resolution. Using this methodology, our packages (and some commercial packages), together with seismic and geophysical data from naturally fractured reservoir oil fields, can create the geometry (see Figure 4) and capture the basic properties of the oil reservoir field under study. A number of numerical simulations are then performed, and from these, oil field exploitation scenarios are generated.

Figure 4. A detail of the initial model for an SPH simulation of a porous medium.

More information about the ENERXICO Project can be found at: https://enerxico-project.eu/

By: Jaime Klapp (ININ, México) and Isidoro Gitler (Cinvestav, México)
Developing Efficient Scientific Gateways for Bioinformatics in Supercomputer Environments Supported by Artificial Intelligence
https://www.risc2-project.eu/2023/03/20/developing-efficient-scientific-gateways-for-bioinformatics-in-supercomputer-environments-supported-by-artificial-intelligence/ | Mon, 20 Mar 2023

Scientific gateways bring enormous benefits to end users by simplifying access to, and hiding the complexity of, the underlying distributed computing infrastructure. Gateways, however, require significant development and maintenance efforts. BioinfoPortal [1], through its CSGrid [2] middleware, takes advantage of the heterogeneous resources of Santos Dumont [3]. However, task submission still requires a substantial step: deciding the configuration that leads to efficient execution. This project aims to develop green and intelligent scientific gateways for BioinfoPortal, supported by high-performance computing (HPC) environments and specialised technologies such as scientific workflows, data mining, machine learning, and deep learning.

The efficient analysis and interpretation of Big Data opens new challenges in molecular biology, genetics, biomedicine, and healthcare for improving personalised diagnostics and therapeutics, and finding new avenues to deal with this massive amount of information becomes necessary. New Bioinformatics and Computational Biology paradigms drive storage, management, and data access. Advances in HPC and Big Data in this domain represent a vast new field of opportunities for bioinformatics researchers, as well as a significant challenge.

The BioinfoPortal science gateway is a multiuser Brazilian infrastructure. We present several challenges for efficiently executing applications and discuss our findings on improving the use of computational resources. We performed several large-scale bioinformatics experiments that are considered computationally intensive and time-consuming. We are currently coupling artificial intelligence to generate models that analyse computational and bioinformatics metadata, in order to understand how automatic learning can predict the efficient use of computational resources. The computational executions are conducted at Santos Dumont, the largest supercomputer in Latin America, dedicated to the research community, with 5.1 petaflops and 36,472 computational cores distributed over 1,134 computational nodes.
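The core idea of the learning step can be sketched as follows: train a model on historical execution metadata so that, for a new job, it can suggest an efficient resource configuration. The snippet below is an illustrative toy, not the BioinfoPortal implementation; the feature set, synthetic data, and runtime model are assumptions.

```python
# Toy sketch of learning a runtime model from execution metadata
# (illustrative only; not the BioinfoPortal/CSGrid implementation).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical historical metadata: [input size (MB), cores, nodes].
X = rng.uniform([10, 1, 1], [5000, 48, 32], size=(500, 3))
# Placeholder runtimes: roughly work/cores plus communication overhead.
y = X[:, 0] / X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 1, 500)

model = RandomForestRegressor(n_estimators=100).fit(X, y)

# For a new job, predict runtimes of candidate configurations and
# suggest the one with the lowest predicted time.
candidates = np.array([[1200, c, n] for c in (12, 24, 48) for n in (1, 2, 4)])
best = candidates[np.argmin(model.predict(candidates))]
print("suggested (cores, nodes):", int(best[1]), int(best[2]))
```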

By:

A. Carneiro, B. Fagundes, C. Osthoff, G. Freire, K. Ocaña, L. Cruz, L. Gadelha, M. Coelho, M. Galheigo, and R. Terra are with the National Laboratory of Scientific Computing, Rio de Janeiro, Brazil.

D. Carvalho is with the Federal Center for Technological Education Celso Suckow da Fonseca, Rio de Janeiro, Brazil.

D. Cardoso is with the Polytechnic Institute of Tomar, Portugal.

F. Boito and L. Teylo are with the University of Bordeaux, CNRS, Bordeaux INP, INRIA, LaBRI, Talence, France.

P. Navaux is with the Informatics Institute, Federal University of Rio Grande do Sul, Brazil.

References:

Ocaña, K. A. C. S.; Galheigo, M.; Osthoff, C.; Gadelha, L. M. R.; Porto, F.; Gomes, A. T. A.; Oliveira, D.; Vasconcelos, A. T. BioinfoPortal: A scientific gateway for integrating bioinformatics applications on the Brazilian national high-performance computing network. Future Generation Computer Systems, v. 107, p. 192-214, 2020.

Mondelli, M. L.; Magalhães, T.; Loss, G.; Wilde, M.; Foster, I.; Mattoso, M. L. Q.; Katz, D. S.; Barbosa, H. J. C.; Vasconcelos, A. T. R.; Ocaña, K. A. C. S; Gadelha, L. BioWorkbench: A High-Performance Framework for Managing and Analyzing Bioinformatics Experiments. PeerJ, v. 1, p. 1, 2018.

Coelho, M.; Freire, G.; Ocaña, K.; Osthoff, C.; Galheigo, M.; Carneiro, A. R.; Boito, F.; Navaux, P.; Cardoso, D. O. Desenvolvimento de um Framework de Aprendizado de Máquina no Apoio a Gateways Científicos Verdes, Inteligentes e Eficientes: BioinfoPortal como Caso de Estudo Brasileiro. In: XXIII Simpósio em Sistemas Computacionais de Alto Desempenho – WSCAD 2022 (https://wscad.ufsc.br/), 2022.

Terra, R.; Ocaña, K.; Osthoff, C.; Cruz, L.; Boito, F.; Navaux, P.; Carvalho, D. Framework para a Construção de Redes Filogenéticas em Ambiente de Computação de Alto Desempenho. In: XXIII Simpósio em Sistemas Computacionais de Alto Desempenho – WSCAD 2022 (https://wscad.ufsc.br/), 2022.

Ocaña, K.; Cruz, L.; Coelho, M.; Terra, R.; Galheigo, M.; Carneiro, A.; Carvalho, D.; Gadelha, L.; Boito, F.; Navaux, P.; Osthoff, C. ParslRNA-Seq: an efficient and scalable RNAseq analysis workflow for studies of differentiated gene expression. In: Latin America High-Performance Computing Conference (CARLA), 2022, Rio Grande do Sul, Brazil. Proceedings of the Latin American High-Performance Computing Conference – CARLA 2022 (http://www.carla22.org/), 2022.

[1] https://bioinfo.lncc.br/

[2] https://git.tecgraf.puc-rio.br/csbase-dev/csgrid/-/tree/CSGRID-2.3-LNCC

[3] https://sdumont.lncc.br

14th International SuperComputing Camp 2023
https://www.risc2-project.eu/events/14th-international-supercomputing-camp-2023/ | Mon, 27 Feb 2023

Mapping human brain functions using HPC
https://www.risc2-project.eu/2023/02/01/mapping-human-brain-functions-using-hpc/ | Wed, 01 Feb 2023

ContentMAP is the first Portuguese project in the field of Psychology and Cognitive Neuroscience to be awarded a European Research Council grant (ERC Starting Grant #802553). The project is mapping how the human brain represents object knowledge – for example, how the brain represents everything one knows about a knife (that it cuts, that it has a handle, that it is made of metal and plastic or metal and wood, that it has a serrated, sharp part, that it is smooth and cold, etc.). To do this, the project collects numerous MRI images while participants see and interact with objects (fMRI). HPC (High-Performance Computing) is of central importance for processing these images: it has made it possible to manipulate these data and to perform machine-learning analyses and complex computations in a timely manner.

Humans are particularly efficient at recognising objects – think about what surrounds us: one recognises the object one is reading this text from as a screen, the place where one sits as a chair, the utensil from which one drinks coffee as a cup, and one does all of this extremely quickly and virtually automatically. One is able to do this despite the fact that 1) one holds large amounts of information about each object (if asked to write down everything you know about a pen, you would certainly have a lot to say); and 2) there are several exemplars of each object type (a glass can be tall; made out of glass, metal, paper, or plastic; of different colours, etc. – but despite that, any of them is still a glass). How does one do this? How is one able to store and process so much information in the process of recognising a glass, and to generalise over all the different instances of a glass to get the concept “glass”? The goal of ContentMAP is to understand the processes that lead to successful object recognition.

The answer to these questions lies in a better understanding of the organisational principles of information in the brain. It is, in fact, the efficient organisation of conceptual information and object representations in the brain that allows one to quickly and efficiently recognise the keyboard in front of each of us. To study the neuronal organisation of object knowledge, the project collects large sets of fMRI data from several participants and then tries to decode the organisational principles of information in the brain.

Given the amount of data and its computational requirements at both the pre-processing and post-processing levels, the use of HPC is essential to enable these studies to be conducted in a timely manner. For example, at the post-processing level, the project uses whole-brain Support Vector Machine classification algorithms (searchlight procedures) that require hundreds of thousands of classifiers to be trained. Moreover, for each of these classifiers one needs to compute a sampling distribution of the mean, as well as test the various classifications of interest, and this has to be done per participant.
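To make the scale of a searchlight analysis concrete, the sketch below trains one SVM per voxel on the sphere of voxels around it, which is why whole-brain searchlights need hundreds of thousands of fits. It is a toy on random data, assuming a cubic grid and two object categories, and is not the actual ContentMAP pipeline.

```python
# Toy searchlight SVM sketch (illustrative; not the ContentMAP pipeline).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, grid = 80, 12                      # toy brain: 12x12x12 voxels
data = rng.normal(size=(n_trials, grid, grid, grid))
labels = rng.integers(0, 2, n_trials)        # two object categories

radius = 2
accuracy = np.zeros((grid, grid, grid))
for x in range(radius, grid - radius):
    for y in range(radius, grid - radius):
        for z in range(radius, grid - radius):
            # Features: the cube of voxels around (x, y, z).
            sphere = data[:, x-radius:x+radius+1,
                             y-radius:y+radius+1,
                             z-radius:z+radius+1].reshape(n_trials, -1)
            accuracy[x, y, z] = cross_val_score(
                SVC(kernel="linear"), sphere, labels, cv=4).mean()
# Each (x, y, z) above is one classifier; on real whole-brain data this
# loop is exactly what gets distributed across HPC nodes.
```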

Because of this, the use of the HPC facilities of the Advanced Computing Laboratory (LCA) at the University of Coimbra is crucial. It allows us to perform these analyses in one to two weeks – something that on our 14-core computers would take a few months, which in practice would most likely mean that the analyses would not be done.

By the Faculty of Psychology and Educational Sciences, University of Coimbra

Reference

ProAction Lab: http://proactionlab.fpce.uc.pt/

Webinar: Addressing the challenges of scientific visualization in the exascale age
https://www.risc2-project.eu/events/webinar-addressing-the-challenges-of-scientific-visualization-in-the-exascale-age/ | Tue, 24 Jan 2023

Date: May 31, 2023 | 4 p.m. (UTC+1)

Speaker: João Barbosa, INESC TEC & MACC

Moderator: Bernd Mohr, Jülich Supercomputing Centre (JSC)

In the coming age of exascale computing, traditional post-hoc scientific visualization and analysis face challenges similar to those of numerical simulation. This talk will cover new methodologies of scientific visualization in high-performance computing systems specially designed for large-scale scientific visualization, which provide greater scalability, flexibility, and detail to overcome some of these challenges.

About the speaker: João Barbosa joined the Minho Advanced Computing Center (MACC) in March 2020 as a full-time researcher in High-Performance Computing, specializing in Scientific Visualization. Previously, he was part of the Texas Advanced Computing Center (TACC) Scalable Visualization team. As a Research Associate at TACC, João worked on several Scientific Visualization (SciVis) projects, ranging from high-level applications such as oil and gas to low-level high-performance software packages, in partnership with leading hardware and software companies. His current research focuses on high-performance real-time in-situ photo-realistic ray tracing for SciVis.
JUPITER Ascending – First European Exascale Supercomputer Coming to Jülich
https://www.risc2-project.eu/2023/01/02/jupiter-ascending-first-european-exascale-supercomputer-coming-to-julich/ | Mon, 02 Jan 2023

It was finally decided in 2022: Forschungszentrum Jülich will be home to Europe’s first exascale computer. The supercomputer is set to be the first in Europe to surpass the threshold of one quintillion (a “1” followed by 18 zeros) calculations per second. The system will be acquired by the European supercomputing initiative EuroHPC JU. The exascale computer should help to solve important and urgent scientific questions regarding, for example, climate change, how to combat pandemics, and sustainable energy production, while also enabling the intensive use of artificial intelligence and the analysis of large data volumes. The overall costs for the system amount to 500 million euros. Of this total, 250 million euros is being provided by EuroHPC JU and a further 250 million euros in equal parts by the German Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW).

The computer, named JUPITER (short for “Joint Undertaking Pioneer for Innovative and Transformative Exascale Research”), will be installed in 2023/2024 on the campus of Forschungszentrum Jülich. It is intended that the system will be operated by the Jülich Supercomputing Centre (JSC), whose supercomputers JUWELS and JURECA currently rank among the most powerful in the world. JSC participated in the application procedure for a high-end supercomputer as a member of the Gauss Centre for Supercomputing (GCS), an association of the three German national supercomputing centres: JSC in Jülich, the High-Performance Computing Center Stuttgart (HLRS), and the Leibniz Supercomputing Centre (LRZ) in Garching. The competition was organized by the European supercomputing initiative EuroHPC JU, which was formed by the European Union together with European countries and private companies.

JUPITER is now set to become the first European supercomputer to make the leap into the exascale class. In terms of computing power, it will be more powerful than 5 million modern laptops or PCs. Just like Jülich’s current supercomputer JUWELS, JUPITER will be based on a dynamic, modular supercomputing architecture, which Forschungszentrum Jülich developed together with European and international partners in the EU’s DEEP research projects.

In a modular supercomputer, various computing modules are coupled together. This enables program parts of complex simulations to be distributed over several modules, ensuring that the various hardware properties can be optimally utilized in each case. Its modular construction also means that the system is well prepared for integrating future technologies such as quantum computing or neuromorphic modules, which emulate the neural structure of a biological brain.

Figure: Modular Supercomputing Architecture – computing and storage modules of the exascale computer in its base configuration (blue), as well as optional modules (green) and modules for future technologies (purple) as possible extensions.

In its base configuration, JUPITER will have an enormously powerful booster module with highly efficient GPU-based computation accelerators. Massively parallel applications are accelerated by this booster in a similar way to a turbocharger, for example to calculate high-resolution climate models, develop new materials, simulate complex cell processes and energy systems, advance basic research, or train next-generation, computationally intensive machine-learning algorithms.

One major challenge is the energy required for such large computing power. The average power draw is anticipated to be up to 15 megawatts. JUPITER has been designed as a “green” supercomputer and will be powered by green electricity. The envisaged warm-water cooling system should help to ensure that JUPITER achieves the highest efficiency values. At the same time, the cooling technology opens up the possibility of intelligently using the waste heat that is produced. For example, just like its predecessor system JUWELS, JUPITER will be connected to the new low-temperature network on the Forschungszentrum Jülich campus. Further potential applications for the waste heat from JUPITER are currently being investigated by Forschungszentrum Jülich.

By Jülich Supercomputing Centre (JSC)

Image: Germany’s fastest supercomputer JUWELS at Forschungszentrum Jülich, which is funded in equal parts by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW) via the Gauss Centre for Supercomputing (GCS). (Copyright: Forschungszentrum Jülich / Sascha Kreklau)

RISC2 cooperated to promote HPC networking in Uruguay
https://www.risc2-project.eu/2022/12/20/risc2-cooperated-to-promote-hpc-networking/ | Tue, 20 Dec 2022

RISC2 organized, together with our partner Universidad de la República, Uruguay (UdelaR), a seminar on High-Performance Computing, which took place between October 31 and December 20, 2022. The event aimed to communicate and showcase the use of HPC in the region.

The RISC2 partners who participated as keynote speakers at the event were Santiago Iturriaga, with a talk about “ClusterUY Advanced Usage Tutorial”; Esteban Mocskos, with a presentation about “Systems based on blockchain. Studying its behavior as a distributed system: from emulation to simulation”; and Sergio Nesmachnow, with a talk about “Message Passing Interface Tutorial”.

According to Esteban Mocskos, this seminar was very relevant for the RISC2 project, as “it is one step to consolidate the HPC network in Latin America (damaged by COVID) and collaborate to the success of future activities”.

Advanced Computing Collaboration to Growth Sustainable Ecosystems
https://www.risc2-project.eu/2022/12/12/advanced-computing-collaboration-to-growth-sustainable-ecosystems/ | Mon, 12 Dec 2022

The impact of High-Performance Computing (HPC) in contexts that require large-scale simulation and computation is well known. In the development of the RISC2 project, and in view of its main goals, HPC is not merely a potential support for scientific challenges identified along the way, but an essential requirement for scientific, productive, and social activities. Different outcomes are presented in academic spaces such as the workshops and main tracks of the Latin American Conference on High-Performance Computing (CARLA 2023). In these spaces, different RISC2 contributions show how HPC enables competitiveness, demands collaboration to tackle global interests, and supports sustainability.

In the European and Latin American (EuroLatAm) HPC ecosystems, it is possible to identify actors in different domains: industry, academia, research, society, and government. Each of them, at different levels, has a set of demands and interactions, depending on its interests. For example, industry demands HPC solutions for productivity and looks to academia for skilled developers to build applications that use those solutions. Another example is the relationship between research and government: in the HPC ecosystem, collaboration enables synergies around common interests, but it demands policies and coordinated roadmaps to support long-term projects and activities with a clear impact on society.

Of course, a historical relationship exists between Latin America and Europe, going back to colonial history. In the case of advanced computing, this can be traced from the first EuroLatAm grid computing projects more than twenty years ago to true supercomputing projects such as RISC and RISC2, now with increasingly shared interests, as the different EuroLatAm HPC projects improve competitiveness and collaboration: competitiveness for industrial and productive business, and partnership (and competitiveness) in science, education, and human wellness. So, paraphrasing Mateo Valero, “who does not compute does not compete”, I would add “who does not collaborate does not survive”.

Taking collaboration and competitiveness together, the RISC2 project makes it possible to identify sustainability elements and sustainable workflows for different projects. The impressive interaction between the actors of the EuroLatAm HPC ecosystem has produced not only scientific results but also policies, recommendations, best practices, and new questions. For these outcomes, at the 2022 Supercomputing Conference, RISC2 received the 2022 HPCwire Editors’ Choice Award for Best HPC Collaboration.

The growth of sustainable advanced computing ecosystems is evident from the results of projects such as RISC2. Collaboration, interaction, and competitiveness foster human development and guarantee technological diversification and peer-to-peer relationships to tackle common interests and problems. RISC2 is thus a crucial step towards a future RISC3, just as the original RISC was in its time.

By Universidad Industrial de Santander

RISC2 project acknowledged by international journal specialised in advanced computing
https://www.risc2-project.eu/2022/11/22/risc2-project-acknowledged-by-international-journal-specialised-in-advanced-computing/ | Tue, 22 Nov 2022

Seminar on High-Performance Scientific Computing
https://www.risc2-project.eu/events/concepts-and-tools-for-solving-multiobjective-optimization-problems/ | Wed, 09 Nov 2022

RISC2 is organizing, together with our partner Universidad de la República, Uruguay (UdelaR), a seminar on High-Performance Scientific Computing.

RISC2 partners are participating directly in:

ClusterUY Advanced Usage Tutorial | November 11, 2022 | Santiago Iturriaga, UdelaR

Talk: Systems based on blockchain. Studying its behaviour as a distributed system: from emulation to simulation | December 8, 2022 | Esteban Mocskos, CONICET

Message Passing Interface Tutorial | December 13, 2022 | Sergio Nesmachnow, UdelaR

Course on “Fundamentals of urban informatics: data analysis and data processing” | December 12–16, 2022 | Renzo Massobrio (Universidad de Cádiz), Sergio Nesmachnow (UdelaR), and Sebastián Baña (New York University)

Course on “Concepts and tools for solving multiobjective optimization problems” | December 12–20, 2022 | Diego Rossit (Universidad Nacional del Sur) and Sergio Nesmachnow (UdelaR)
