cinvestav - RISC2 Project
https://www.risc2-project.eu

ABACUS
https://www.risc2-project.eu/2023/06/12/abacus/ (Mon, 12 Jun 2023)

  • System name: ABACUS
  • Location: Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional
  • Areas: Mathematics and applied engineering, data mining
  • Web
    Subsequent Progress And Challenges Concerning The México-UE Project ENERXICO: Supercomputing And Energy For México
    https://www.risc2-project.eu/2023/05/24/subsequent-progress-and-challenges-concerning-the-mexico-ue-project-enerxico-supercomputing-and-energy-for-mexico/ (Wed, 24 May 2023)

    In this short notice we briefly describe some subsequent advances and challenges concerning two work packages developed in the ENERXICO Project. The project opened the possibility of collaborating with colleagues from institutions that did not participate in it, for example from the University of Santander in Colombia and the University of Vigo in Spain. This exemplifies the importance of the RISC2 project: strengthening collaboration and finding joint research areas and applied HPC ventures is of great benefit to both our Latin American countries and the EU. We are now initiating talks with some of the RISC2 partners to target several energy-related topics.

    The ENERXICO Project focused on developing advanced simulation software solutions for the oil & gas, wind energy and transportation powertrain industries. The Mexican institutions that collaborated in the project were ININ (institution responsible for México), Centro de Investigación y de Estudios Avanzados del IPN (Cinvestav), Universidad Nacional Autónoma de México (UNAM IINGEN, FCUNAM), Universidad Autónoma Metropolitana-Azcapotzalco, Instituto Mexicano del Petróleo, Instituto Politécnico Nacional (IPN) and Pemex; the European institutions were the Barcelona Supercomputing Center (institution responsible for the EU), Technische Universität München (TUM, Germany), Université Grenoble Alpes (UGA, France), CIEMAT (Spain), Repsol, Iberdrola, Bull (France) and Universidad Politécnica de Valencia (Spain).

    The Project comprised four work packages (WP):

    WP1 Exascale Enabling: a cross-cutting work package focused on assessing performance bottlenecks and improving the efficiency of the HPC codes proposed in the vertical WPs (EU Coordinator: BULL, MEX Coordinator: CINVESTAV-COMPUTACIÓN);

    WP2 Renewable energies: this WP deployed new applications required to design, optimize and forecast the production of wind farms (EU Coordinator: IBR, MEX Coordinator: ININ);

    WP3 Oil and gas energies: this WP addressed the impact of HPC on the entire oil industry chain (EU Coordinator: REPSOL, MEX Coordinator: ININ);

    WP4 Biofuels for transport: this WP developed advanced numerical simulations of biofuels under engine-like conditions (EU Coordinator: UPV-CMT, MEX Coordinator: UNAM).

    For WP1 the following codes were optimized for exascale computers: Alya, Bsit, DualSPHysics, ExaHyPE, SeisSol, SEM46 and WRF.

    As an example, we present some of the results for the DualSPHysics code. We evaluated two architectures. The first set of hardware consisted of identical nodes, each equipped with two Intel Xeon Gold 6248 processors clocking at 2.5 GHz and about 192 GB of system memory; each node contained four NVIDIA V100 Tesla GPUs with 32 GB of memory each. The second set consisted of identical nodes, each equipped with two AMD Milan 7763 processors clocking at 2.45 GHz and about 512 GB of system memory; each node contained four NVIDIA A100 Ampere GPUs with 40 GB of memory each. The code was compiled and linked with CUDA 10.2 and OpenMPI 4. The application was executed using one GPU per MPI rank.
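
    The one-GPU-per-MPI-rank launch can be sketched with a small example. Assuming ranks are placed on nodes in blocks of four (the function name and the 4-GPU default are illustrative, not taken from the ENERXICO codes), each rank selects its local GPU by taking its rank modulo the GPUs per node:

```python
import os

def select_gpu(rank: int, gpus_per_node: int = 4) -> int:
    """Map a global MPI rank to a local GPU index on its node.

    Assumes ranks are placed node-by-node in blocks of `gpus_per_node`,
    one rank per GPU, so the local GPU is rank modulo GPUs-per-node.
    """
    local_gpu = rank % gpus_per_node
    # Restricting CUDA_VISIBLE_DEVICES before CUDA initializes makes
    # the chosen GPU appear as device 0 to this rank.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(local_gpu)
    return local_gpu

# Example: 8 ranks across 2 nodes with 4 GPUs each
mapping = {rank: select_gpu(rank) for rank in range(8)}
```

    In a real launch the rank would come from the MPI runtime (e.g. via mpi4py) rather than a loop.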

    In Figures 1 and 2 we show the scalability of the code for the strong and weak scaling tests, which indicate that the scaling is very good. Motivated by these excellent results, we are in the process of performing new SPH simulations on the LUMI supercomputer with up to 26,834 million particles, run on up to 500 GPUs, i.e., 53.7 million particles per GPU. These simulations will be done initially for a Wave Energy Converter (WEC) farm (see Figure 3) and later for turbulent models.
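
    For reference, the efficiencies behind such scaling plots can be computed as follows; the timings below are made-up numbers for illustration, not ENERXICO measurements:

```python
def strong_scaling_efficiency(t_base: float, t_n: float, n: float) -> float:
    """Strong scaling (fixed total particle count): efficiency is the
    measured speedup t_base / t_n divided by the GPU-count ratio n."""
    return (t_base / t_n) / n

def weak_scaling_efficiency(t_base: float, t_n: float) -> float:
    """Weak scaling (fixed particles per GPU): the ideal runtime stays
    constant, so efficiency is simply t_base / t_n."""
    return t_base / t_n

# Made-up example: quadrupling the GPUs cuts the runtime from 100 s to 27 s,
# giving roughly 93% parallel efficiency.
eff = strong_scaling_efficiency(100.0, 27.0, 4)
```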

    Figure 1. Strong scaling test with a fixed number of particles and an increasing number of GPUs.

     

    Figure 2. Weak scaling test with increasing numbers of particles and GPUs.

     

    Figure 3. Wave Energy Converter (WEC) Farm (taken from https://corpowerocean.com/)

     

    As part of WP3, ENERXICO developed a first version of a computer code called Black Hole (the BH code) for the numerical simulation of oil reservoirs, based on the numerical technique known as Smoothed Particle Hydrodynamics (SPH). This new code is an extension of the DualSPHysics code (https://dual.sphysics.org/). It is the first SPH-based code developed for the numerical simulation of oil reservoirs and has important benefits over commercial codes based on other numerical techniques.
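
    The core of any SPH code is a smoothing-kernel summation over neighbouring particles. A minimal sketch of the idea, using the standard Monaghan cubic spline kernel (DualSPHysics and the BH code use their own optimized kernels and neighbour search, so this is purely illustrative):

```python
import math

def cubic_spline_w(r: float, h: float) -> float:
    """Monaghan cubic spline SPH kernel in 3D, support radius 2h."""
    q = r / h
    sigma = 1.0 / (math.pi * h**3)   # 3D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def density(positions, masses, h):
    """SPH density summation: rho_i = sum_j m_j * W(|r_i - r_j|, h).
    Brute-force O(N^2) pairs; real codes use cell lists for neighbours."""
    out = []
    for xi in positions:
        rho = 0.0
        for xj, mj in zip(positions, masses):
            rho += mj * cubic_spline_w(math.dist(xi, xj), h)
        out.append(rho)
    return out
```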

    The BH code is a large-scale massively parallel reservoir simulator capable of performing simulations with billions of “particles”, or fluid elements, that represent the system under study. It contains improved multi-physics modules that automatically combine the effects of interrelated physical and chemical phenomena to accurately simulate in-situ recovery processes. The work has also led to the development of a graphical user interface, a multi-platform application for code execution and visualization, used to carry out simulations with data provided by industrial partners and to perform comparisons with available commercial packages.

    Furthermore, a considerable effort is presently being made to simplify the process of setting up the input for reservoir simulations from exploration data, by means of a workflow fully integrated into our industrial partners’ software environment. A crucial part of the numerical simulations is the equation of state. We have developed an equation of state based on crude oil data (the so-called PVT data) in two forms: first as a subroutine integrated into the code, and second as a subroutine that interpolates property tables generated from the equation-of-state subroutine.
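
    The table form of the equation of state amounts to a property lookup with interpolation. A minimal sketch, assuming a table of (pressure, property) pairs sorted by pressure; the function name and table layout are hypothetical, not the BH code’s actual interface:

```python
from bisect import bisect_left

def interp_pvt(pressure: float, table) -> float:
    """Linearly interpolate a PVT property table.

    `table` is a list of (pressure, property) pairs sorted by pressure.
    Values outside the table range are clamped to the endpoints."""
    ps = [p for p, _ in table]
    if pressure <= ps[0]:
        return table[0][1]
    if pressure >= ps[-1]:
        return table[-1][1]
    i = bisect_left(ps, pressure)          # first entry with p >= pressure
    (p0, v0), (p1, v1) = table[i - 1], table[i]
    t = (pressure - p0) / (p1 - p0)        # fractional position in the interval
    return v0 + t * (v1 - v0)
```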

    An oil reservoir is composed of a porous rock medium containing a multiphase fluid of oil and gas together with rock and other solids. The aim of the code is to simulate fluid flow in a porous medium, as well as the behaviour of the system at different pressures and temperatures. The tool should reduce the uncertainty of the predictions that are carried out. For example, it may answer questions about the benefits of injecting a solvent (CO2, nitrogen, combustion gases, methane, etc.) into a reservoir, and predict the breakthrough times of the gases at the production wells. With these estimates, operators can take the necessary measures to mitigate their presence and calculate the cost, the injection pressure, the injection volumes and, most importantly, where and for how long to inject. The same applies to more complex processes in which fluids, air or steam are injected and interact with the rock, oil, water and gas present in the reservoir. The simulator should be capable of supporting monitoring and measurement plans.

    In order to perform a simulation of an oil reservoir field, an initial model needs to be created. Using forward and inverse geophysical numerical techniques, the ENERXICO project evaluated novel high-performance simulation packages for challenging seismic exploration cases characterized by extreme geometric complexity. We are now exploring high-order methods based on fully unstructured tetrahedral meshes, as well as tree-structured Cartesian meshes with adaptive mesh refinement (AMR) for better spatial resolution. Using this methodology, our packages (and some commercial packages), together with seismic and geophysical data of naturally fractured reservoir oil fields, can create the geometry (see Figure 4) and reproduce basic properties of the oil reservoir field we want to study. A number of numerical simulations are then performed, and from these, exploitation scenarios for the oil fields are generated.
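
    Tree-structured Cartesian AMR recursively subdivides cells only where more resolution is needed. A toy 2D quadtree sketch of the refinement step (purely illustrative; production seismic codes use far more elaborate 3D data structures):

```python
class Cell:
    """Minimal quadtree cell for tree-structured Cartesian AMR (2D toy)."""

    def __init__(self, x: float, y: float, size: float, level: int = 0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self, needs_refinement, max_level: int) -> None:
        """Split this cell into four children where the criterion says so,
        recursing until max_level is reached."""
        if self.level >= max_level or not needs_refinement(self):
            return
        half = self.size / 2
        self.children = [
            Cell(self.x + dx * half, self.y + dy * half, half, self.level + 1)
            for dx in (0, 1) for dy in (0, 1)
        ]
        for child in self.children:
            child.refine(needs_refinement, max_level)

def leaves(cell):
    """Collect the leaf cells, i.e. the cells actually used for the solution."""
    if not cell.children:
        return [cell]
    return [leaf for child in cell.children for leaf in leaves(child)]
```

    In practice the refinement criterion would flag cells crossing material interfaces or faults rather than refining uniformly.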

     

    Figure 4. A detail of the initial model for an SPH simulation of a porous medium.

     

    More information about the ENERXICO Project can be found at https://enerxico-project.eu/.

    By: Jaime Klapp (ININ, México) and Isidoro Gitler (Cinvestav, México)


    Xiuhcoatl
    https://www.risc2-project.eu/2022/04/22/xiuhcoatl/ (Fri, 22 Apr 2022)

  • Title: Xiuhcoatl
  • System name: Clúster Híbrido de Supercómputo
  • Location: CGSTIC – CINVESTAV
  • Web
  • OS: Linux CentOS 6.X
  • Country: Mexico
  • Areas: Mathematics, engineering and applied physics
  • Processor architecture:
    • CPU x86 in 213 nodes:
      • 67 nodes: AMD Interlagos 6274.
      • 84 nodes: Intel X5675.
      • 62 nodes: Intel E5-V4.
    • GPU/co-processors in 40 nodes:
      • 5 nodes: NVIDIA 2070/2075 GPUs & Intel X5675.
      • 12 nodes: NVIDIA K40 GPUs & Intel E5-2650L v3.
      • 4 nodes: Intel Xeon Phi 7120P.
      • 19 nodes: NVIDIA K80 GPUs & Intel E5-2660 v3.
  • Manufacturer: hybrid cluster (Intel, AMD, NVIDIA GPUs and Intel co-processors)
  • Peak performance: 313 Tflops
  • Access Policy

    ABACUS I
    https://www.risc2-project.eu/2022/04/22/abacus-i/ (Fri, 22 Apr 2022)

  • Title: ABACUS I
  • System name: ABACUS I
  • Location: Laboratory for Applied Mathematics and HPC (ABACUS – CINVESTAV)
  • Web
  • OS: Linux
  • Country: Mexico
  • Processor architecture:
    • Two sub-systems: SGI ICE-XA (CPU nodes) and SGI ICE-X (GPU nodes)
  • Manufacturer: SGI
  • Peak performance:
    • 429 Tflops (284 Tflops Linpack CPUs + 145 Tflops Linpack GPUs)
  • Access Policy: N/D
    CINVESTAV hosts a meeting with representatives of supercomputing centers in Mexico
    https://www.risc2-project.eu/2022/01/18/cinvestav-hosts-a-meeting-with-representatives-of-supercomputing-centers-in-mexico/ (Tue, 18 Jan 2022)

    CINVESTAV, one of RISC2’s partners, hosted a virtual meeting with representatives from the Barcelona Supercomputing Center (BSC) and the HPC community in Mexico on the 7th of December.

    Mexico has a solid tradition in scientific research and education and has excellent human resources in the computer science field. Participants focused on promoting cooperation between supercomputing centers across the country. With that aim, they focused on the need to improve connectivity and engage on networks that allow for efficient and effective coordination among centers.

    Drawing on other cooperation-focused initiatives, such as the Red Española de Supercomputación (RES) in Spain and PRACE at the European level, the Mexican HPC community is seeking to kick off a nationwide collaboration network.

    RISC2 with a strong presence at CARLA 2021
    https://www.risc2-project.eu/2021/10/05/https-www-risc2-project-eu-2021-10-05-risc2-with-a-strong-presence-at-carla-2021/ (Tue, 05 Oct 2021)

    The RISC2 project participated at the Latin America High-Performance Computing Conference (CARLA 2021), which took place between September 27 and October 15, 2021, with 888 registered attendees from 25 different countries. The consortium of the RISC2 project participated in the organization of several activities during this international conference, within the scope of the collaboration between Europe and Latin America communities, working on HPC-related topics.

    CARLA is an international conference aimed at providing a forum to foster the growth and strength of the High-Performance Computing (HPC) community in Latin America through the exchange and dissemination of new ideas, techniques, and research in HPC and its application areas. The general chair of the 2021 edition was Isidoro Gitler, from Cinvestav, who coordinated and participated in all the event’s activities.

    Workshops

    Different RISC2 partners were involved in the organization of the scientific workshops at the CARLA 2021 conference. Of the seven workshops organized for the conference, two came from the RISC2 consortium. The workshop on HPC Collaboration between Europe and Latin America took place online on October 5, 2021. Its goal was to provide a space dedicated to exchanging experiences and to promoting and supporting new collaborations across countries of Europe and Latin America, within the framework of the recently launched ‘A network for supporting the coordination of High-Performance Computing research between Europe and Latin America’ (RISC2). This workshop drew up to 55 participants, with Pedro Vieira Alberto, from the University of Coimbra, as one of the invited speakers. Ulisses Cortés, from the Barcelona Supercomputing Center, presented the RISC2 project as an example of HPC collaboration between Europe and Latin America. The chairs of this event were Ulisses Cortés and Rafael Mayo-García, from CIEMAT.

    The other workshop organized by the RISC2 team was the workshop on HPC and Energy, held online on October 4, with Álvaro Coutinho, from COPPE, as chair. This workshop focused on HPC techniques applied to the energy sector, where they can improve and transform many industrial activities. HPC can provide several solutions for the energy sector: oil and gas solutions for upstream, midstream and downstream problems; improving wind energy performance; solving combustion efficiency issues for transportation systems; making nuclear systems more efficient and safer; improving solar energy systems; and improving the quality and efficiency of seismic and geophysical simulations.

    It is also worth mentioning that Ginés Guerrero, from NLHPC, was one of the workshop chairs of CARLA 2021.

    Tutorials

    Carla Osthoff, from the Laboratório Nacional de Computação Científica, was a Tutorial Chair member for the 11 tutorials accepted at CARLA 2021. CARLA 2021 provided tutorials and hands-on workshops at both introductory and advanced levels, specifically designed for undergraduate and master’s students across Latin American countries. There were two periods: Fundamental Tutorials, comprising six tutorials held the week before CARLA 2021, and Advanced Tutorials, with five tutorials held the week after CARLA 2021. The CARLA 2021 tutorials were supported by Latin American, Caribbean and European institutions.
    Esteban Mocskos, from the Universidad de Buenos Aires, was involved in the organization of two tutorials: “OpenMP: Introduction to shared memory models” and “Introduction to Distributed Memory Models using MPI”. Both activities combined theory and hands-on exercises, lasted four hours and had close to 40 attendees each.

    The NLHPC partner was also responsible for a tutorial on working with a resource manager on an HPC infrastructure, covering the use of Slurm. Two more tutorials were organized by NLHPC: one on performance analysis tools, with the participation of a member of the NLHPC Scientific Committee, and another on quantum computing, with the participation of IBM.

    The CARLA 2021 conference had more than 30 institutions on its board committees and more than 115 attendees connected simultaneously.

    All the videos are available here.
