nvidia - RISC2 Project https://www.risc2-project.eu

Hypatia https://www.risc2-project.eu/2023/06/11/hypatia/ Sun, 11 Jun 2023

  • Title: Hypatia
  • System name: Hypatia
  • Location: Universidad de los Andes Colombia – Data Center (Bogotá)
  • Web
  • OS: Linux CentOS 7
  • Country: Colombia
  • Processor architecture:
    • Master Node: 1 PowerEdge R640 Server: 2 x Intel® Xeon® Silver 4210R 2.4G, 10C/20T, 9.6GT/s, 13.75M Cache, Turbo, HT (100W) DDR4-2400. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
    • Compute Node:  
      • 10 PowerEdge R640 Server: 2 x Intel® Xeon® Gold 6242R 3.1G, 20C/40T, 10.4GT/s, 27.5M Cache, Turbo, HT (205W) DDR4-2933. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
      • 3 PowerEdge R6525 Server 256 GB: 2 x AMD EPYC 7402 2.80GHz, 24C/48T, 128M Cache (180W) DDR4-3200. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
      • 2 PowerEdge R6525 Server 512 GB: 2 x AMD EPYC 7402 2.80GHz, 24C/48T, 128M Cache (180W) DDR4-3200. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
      • 1 PowerEdge R6525 Server 1 TB: 2 x AMD EPYC 7402 2.80GHz, 24C/48T, 128M Cache (180W) DDR4-3200. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
      • 2 PowerEdge R740 Server: 3 x NVIDIA® Quadro® RTX6000 24 GB, 250W, Dual Slot, PCIe x16 Passive Cooled, Full Height GPU. Intel® Xeon® Gold 6226R 2.9GHz, 16C/32T, 10.4GT/s, 22M Cache, Turbo, HT (150W) DDR4-2933. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
    • Storage:
      • 1 Dell EMC ME4084 SAS OST – 84 x 4TB HDD 7.2K 512n SAS12 3.5in
      • 1 Dell EMC ME4024 SAS MDT – 24 X 960 GB SSD SAS Read Intensive 12Gbps 512e 2.5in Hot-plug Drive, PM5-R, 1DWPD, 1752 TBW
      • 4 PowerEdge R740 Server: 2 x Intel® Xeon® Gold 6230R 2.1G, 26C/52T, 10.4GT/s, 35.75M Cache, Turbo, HT (150W) DDR4-2933. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
  • Vendor: DELL
  • Peak performance: TBC
  • Access Policy: TBC
  • Main research domains: TBC
    Subsequent Progress And Challenges Concerning The México-UE Project ENERXICO: Supercomputing And Energy For México https://www.risc2-project.eu/2023/05/24/subsequent-progress-and-challenges-concerning-the-mexico-ue-project-enerxico-supercomputing-and-energy-for-mexico/ Wed, 24 May 2023

    In this short note, we briefly describe subsequent advances and challenges related to two work packages developed in the ENERXICO Project. This work opened the possibility of collaborating with colleagues from institutions that did not participate in the project, for example from the University of Santander in Colombia and from the University of Vigo in Spain. This exemplifies the importance of the RISC2 project: strengthening collaboration and finding joint research areas and applied HPC ventures is of great benefit to both our Latin American countries and the EU. We are now initiating talks with some of the RISC2 partners to target several energy-related topics. 

    The ENERXICO Project focused on developing advanced simulation software solutions for the oil & gas, wind energy and transportation powertrain industries. The institutions that collaborated in the project were, for México: ININ (institution responsible for México), Centro de Investigación y de Estudios Avanzados del IPN (Cinvestav), Universidad Nacional Autónoma de México (UNAM IINGEN, FCUNAM), Universidad Autónoma Metropolitana-Azcapotzalco, Instituto Mexicano del Petróleo, Instituto Politécnico Nacional (IPN) and Pemex; and for the European Union: Barcelona Supercomputing Center (institution responsible for the EU), Technische Universität München (TUM, Germany), Université Grenoble Alpes (UGA, France), CIEMAT (Spain), Repsol, Iberdrola, Bull (France) and Universidad Politécnica de Valencia (Spain). 

    The project comprised four work packages (WPs): 

    WP1 Exascale Enabling: a cross-cutting work package that focused on assessing performance bottlenecks and improving the efficiency of the HPC codes proposed in the vertical WPs (EU Coordinator: BULL, MEX Coordinator: CINVESTAV-COMPUTACIÓN); 

    WP2 Renewable energies: this WP deployed new applications required to design, optimize and forecast the production of wind farms (EU Coordinator: IBR, MEX Coordinator: ININ); 

    WP3 Oil and gas energies: this WP addressed the impact of HPC on the entire oil industry chain (EU Coordinator: REPSOL, MEX Coordinator: ININ); 

    WP4 Biofuels for transport: this WP carried out advanced numerical simulations of biofuels under conditions similar to those of an engine (EU Coordinator: UPV-CMT, MEX Coordinator: UNAM). 

    For WP1 the following codes were optimized for exascale computers: Alya, Bsit, DualSPHysics, ExaHyPE, SeisSol, SEM46 and WRF. 

    As an example, we present some of the results for the DualSPHysics code. We evaluated two architectures. The first set of hardware consisted of identical nodes, each equipped with 2 Intel Xeon Gold 6248 processors clocking at 2.5 GHz, with about 192 GB of system memory; each node contained 4 NVIDIA V100 Tesla GPUs with 32 GB of main memory each. The second set of hardware consisted of identical nodes, each equipped with 2 AMD Milan 7763 processors clocking at 2.45 GHz, with about 512 GB of system memory; each node contained 4 NVIDIA A100 (Ampere) GPUs with 40 GB of main memory each. The code was compiled and linked with CUDA 10.2 and OpenMPI 4. The application was executed using one GPU per MPI rank. 
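
    As a side note on the one-GPU-per-rank setup: a common pattern is to restrict each MPI rank to a single device before the CUDA context is created. Below is a minimal sketch in Python, assuming mpi4py is available and four GPUs per node as on the machines above; the variable names and the modulo mapping are illustrative, not ENERXICO's actual launcher.

        import os
        from mpi4py import MPI

        rank = MPI.COMM_WORLD.Get_rank()
        gpus_per_node = 4  # matches the node layout described above
        # Must be set before any CUDA initialization in this process:
        os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % gpus_per_node)
        # From here on, each rank sees exactly one GPU (its device 0).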

    In Figures 1 and 2 we show the scalability of the code in strong and weak scaling tests, which indicate that the scaling is very good. Motivated by these excellent results, we are preparing new SPH simulations on the LUMI supercomputer with up to 26,834 million particles, to be run on up to 500 GPUs, i.e. 53.7 million particles per GPU. These simulations will be done initially for a Wave Energy Converter (WEC) farm (see Figure 3), and later for turbulent models. 
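
    For readers unfamiliar with these tests (a generic formulation, not the project's exact metric): strong scaling fixes the total problem size while adding GPUs, so the ideal runtime on n GPUs is T(1)/n; weak scaling grows the problem with the resources, so the ideal runtime stays constant. A small helper for both efficiencies:

        def strong_efficiency(t1, tn, n):
            # Fixed total work: ideal runtime on n GPUs is t1 / n.
            return (t1 / tn) / n

        def weak_efficiency(t1, tn):
            # Work grows with n: ideal runtime stays equal to t1.
            return t1 / tn

        # Made-up runtimes in seconds, for illustration only:
        print(strong_efficiency(t1=100.0, tn=13.0, n=8))  # ~0.96
        print(weak_efficiency(t1=100.0, tn=104.0))        # ~0.96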

    Figure 1. Strong scaling test with a fixed number of particles but an increasing number of GPUs.

    Figure 2. Weak scaling test with an increasing number of particles and GPUs.

    Figure 3. Wave Energy Converter (WEC) Farm (taken from https://corpowerocean.com/)

    As part of WP3, ENERXICO developed a first version of a computer code called Black Hole (or BH code) for the numerical simulation of oil reservoirs, based on the numerical technique known as Smoothed Particle Hydrodynamics (SPH). This new code is an extension of the DualSPHysics code (https://dual.sphysics.org/); it is the first SPH-based code developed for the numerical simulation of oil reservoirs, and it has important benefits over commercial codes based on other numerical techniques. 
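
    To give a flavour of the technique (a textbook SPH building block, not code taken from the BH or DualSPHysics codes): each fluid element's density is obtained by summing a smoothing kernel over neighbouring particles. A minimal NumPy sketch with the standard cubic-spline kernel:

        import numpy as np

        def cubic_spline_kernel(r, h):
            # Standard 3D cubic-spline smoothing kernel W(r, h).
            q = r / h
            sigma = 1.0 / (np.pi * h**3)
            return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                           np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

        def sph_density(positions, masses, h):
            # Summation density: rho_i = sum_j m_j * W(|x_i - x_j|, h).
            diff = positions[:, None, :] - positions[None, :, :]
            r = np.linalg.norm(diff, axis=-1)
            return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

        # Example: 100 random particles of equal mass in a unit box.
        pos = np.random.rand(100, 3)
        rho = sph_density(pos, masses=np.full(100, 1e-3), h=0.1)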

    The BH code is a large-scale, massively parallel reservoir simulator capable of performing simulations with billions of “particles” or fluid elements that represent the system under study. It contains improved multi-physics modules that automatically combine the effects of interrelated physical and chemical phenomena to accurately simulate in-situ recovery processes. This work has also led to a graphical user interface, a multi-platform application for code execution and visualization, used to carry out simulations with data provided by industrial partners and to compare results with available commercial packages. 

    Furthermore, a considerable effort is presently being made to simplify the process of setting up the input for reservoir simulations from exploration data, by means of a workflow fully integrated in our industrial partners’ software environment. A crucial part of the numerical simulations is the equation of state. We have developed an equation of state based on crude oil data (the so-called PVT) in two forms: first, as a subroutine integrated into the code; and second, as a subroutine that interpolates property tables generated from the equation-of-state subroutine. 
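
    A sketch of the second, table-based form (the EOS routine and the two properties below are toy stand-ins, not the project's actual PVT model): properties are precomputed on a pressure grid from the EOS subroutine and interpolated at runtime.

        import numpy as np

        def eos(p):
            # Toy stand-in for the crude-oil EOS subroutine:
            # returns (oil density [kg/m^3], solution gas-oil ratio).
            return 800.0 - 1e-6 * p, 0.1 * np.sqrt(p)

        pressures = np.linspace(1e5, 5e7, 500)          # table grid in Pa
        rho_tab, rs_tab = np.vectorize(eos)(pressures)  # tables from the EOS

        def pvt_lookup(p):
            # Runtime path: interpolate the tables instead of calling the EOS.
            return np.interp(p, pressures, rho_tab), np.interp(p, pressures, rs_tab)

        rho, rs = pvt_lookup(2.3e7)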

    An oil reservoir is composed of a porous medium holding a multiphase fluid of oil, gas and water, together with the rock and other solids. The aim of the code is to simulate fluid flow in a porous medium, as well as the behaviour of the system at different pressures and temperatures. The tool should allow the reduction of uncertainties in the predictions that are carried out. For example, it may answer questions about the benefits of injecting a solvent (CO2, nitrogen, combustion gases, methane, etc.) into a reservoir, and about the breakthrough times of the gases at the production wells. With these estimates, operators can take the necessary measures to mitigate their presence and calculate the cost, the injection pressure, the injection volumes and, most importantly, where and for how long to inject. The same applies to more complex processes, such as those where fluids, air or steam are injected and interact with the rock, oil, water and gas present in the reservoir. The simulator should be capable of monitoring these processes and preparing measurement plans. 

    In order to perform a simulation of a reservoir oil field, an initial model needs to be created. Using geophysical forward and inverse numerical techniques, the ENERXICO project evaluated novel, high-performance simulation packages for challenging seismic exploration cases characterized by extreme geometric complexity. We are now exploring high-order methods based on fully unstructured tetrahedral meshes, as well as tree-structured Cartesian meshes with adaptive mesh refinement (AMR), for better spatial resolution; a schematic of the AMR idea is sketched below. Using this methodology, our packages (and some commercial packages), together with seismic and geophysical data of naturally fractured reservoir oil fields, are able to create the geometry (see Figure 4) and capture basic properties of the oil reservoir field we want to study. A number of numerical simulations are then performed, and from these, exploitation scenarios for the oil fields are generated.
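
    Schematically, tree-structured AMR works as follows (a generic 2D quadtree with an invented, purely size-based criterion, not the project's meshing code): cells are split recursively wherever an error indicator exceeds a tolerance.

        from dataclasses import dataclass

        @dataclass
        class Cell:
            cx: float   # cell centre
            cy: float
            size: float
            depth: int = 0

            def split(self):
                h = self.size / 4.0
                return [Cell(self.cx + dx, self.cy + dy, self.size / 2.0, self.depth + 1)
                        for dx in (-h, h) for dy in (-h, h)]

        def refine(cell, indicator, tol, max_depth=8):
            # Split recursively while the error indicator is above tolerance.
            if cell.depth >= max_depth or indicator(cell) <= tol:
                return [cell]
            return [leaf for child in cell.split()
                    for leaf in refine(child, indicator, tol, max_depth)]

        # Example: refine everywhere until cells are smaller than 0.1.
        leaves = refine(Cell(0.5, 0.5, 1.0), indicator=lambda c: c.size, tol=0.1)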


    Figure 4. A detail of the initial model for an SPH simulation of a porous medium.

    More information about the ENERXICO Project can be found at: https://enerxico-project.eu/

    By: Jaime Klapp (ININ, México) and Isidoro Gitler (Cinvestav, México)

    CLUSTER UY https://www.risc2-project.eu/2022/04/22/cluster-uy/ Fri, 22 Apr 2022

  • Title: CLUSTER UY
  • System name: CLUSTER UY
  • Location: National Supercomputing Center – Datacenter Ing. José Luis Massera – Antel
  • Web
  • OS: Linux CentOS 7
  • Country: Uruguay
  • Processor architecture:
    • 1216 CPU computing cores (1120 Intel Xeon-Gold 6138 2.00GHz cores and 96 AMD EPYC 7642 2.30GHz cores).
    • 3.8 TB of RAM
    • 28 NVIDIA Tesla P100 GPU cards with 12 GB of memory each (100,352 GPU cores in total).
  • Vendor: N/D
  • Peak performance:
    • 327 Tflops
  • Access Policy
  • Main research domains: Astronomy, Bioinformatics, Biology, Computer graphics, Computer Sciences, Data analysis, Energy, Engineering, Geoinformatics, Mathematics, Optimization, Physics, Social Sciences, Statistics
    Miztli https://www.risc2-project.eu/2022/04/22/miztli/ Fri, 22 Apr 2022

  • Title: Miztli
  • System name: Miztli
  • Location: Dirección General de Cómputo y de Tecnologías de Información y Comunicación de la Universidad Nacional Autónoma de México
  • Web
  • OS: Red Hat Enterprise Linux, Scientific Linux.
  • Country: Mexico
  • Areas: Astrophysics, geophysics, applied physics, mathematics and engineering
  • Processor architecture:
    • X86 processor:
      • 332 nodes HP ProLiant SL230 and SL250.
      • 2 Intel Xeon processors per node.
      • 5312 cores.
    • GPU processor: 16 NVIDIA M2090 GPUs.
  • Manufacturer: HP
  • Peak performance:
    • 118 Tflops
  • Access Policy
    CUETLAXCOAPAN https://www.risc2-project.eu/2022/04/22/cuetlaxcoapan/ Fri, 22 Apr 2022

  • Title: CUETLAXCOAPAN
  • System name: CUETLAXCOAPAN
  • Location: Laboratorio Nacional de Supercómputo del Sureste de México (LNS)
  • Web
  • OS: N/D
  • Country: Mexico
  • Processor architecture:
    • 228 Thin nodes (5472 cores): 2 CPUs of 12 cores and 128 GB RAM per node.
    • 20 Fat nodes (480 cores): 2 CPUs of 12 cores and 512 GB RAM per node.
    • 20 semi-Fat nodes (480 cores): 2 CPUs of 12 cores and 256 GB RAM per node.
    • 2 Big Fat nodes (120 cores): 2 CPUs of 30 cores and 1024 GB (1 TB) RAM per node.
    • 2 GPU nodes with 2 NVIDIA K40 cards each (11520 CUDA cores): 2 CPUs of 12 cores and 128 GB RAM per node.
    • 2 MIC nodes with 2 Intel 7120P cards each (244 MIC cores): 2 CPUs of 12 cores and 128 GB RAM per node.
  • Manufacturer: Fujitsu
  • Peak performance:
    • 250 Tflops
  • Access Policy

    THUBAT KAAL II https://www.risc2-project.eu/2022/04/22/thubat-kaal-ii/ Fri, 22 Apr 2022

  • Title: THUBAT KAAL II
  • System name: THUBAT KAAL II
  • Location: Centro Nacional de Supercómputo (CNS)
  • Web
  • OS: RedHat 7.3
  • Country: Mexico
  • Processor architecture:
    • 86 nodes: 82 nodes with Intel Xeon x86 (Skylake) processors & 4 nodes with 4 NVIDIA P100 cards each
  • Workload manager: SLURM
  • Storage system: Lustre, high availability, 1.7 PB
  • Network: InfiniBand EDR 100 Gbps – all-to-all topology
  • Manufacturer: ATOS Bull
  • Peak performance:
    • 257 Tflops
  • Access Policy

    Xiuhcoatl https://www.risc2-project.eu/2022/04/22/xiuhcoatl/ Fri, 22 Apr 2022

  • Title: Xiuhcoatl
  • System name: Clúster Híbrido de Supercómputo
  • Location: CGSTIC – CINVESTAV
  • Web
  • OS: Linux CentOS 6.X
  • Country: Mexico
  • Areas: Mathematics, engineering and applied physics
  • Processor architecture:
    • CPU x86 in 213 nodes:
      • 67 nodes AMD Interlagos 6274.
      • 84 nodes Intel X5675.
      • 62 nodes Intel E5-V4.
    • GPU/Co-processors in 40 nodes:
      • 5 nodes with NVIDIA 2070/2075 GPUs & Intel X5675.
      • 12 nodes with NVIDIA K40 GPUs & Intel E5-2650L v3.
      • 4 nodes with Xeon Phi 7120P.
      • 19 nodes with NVIDIA K80 GPUs & Intel E5-2660 v3.
  • Manufacturer: hybrid cluster (Intel, AMD, NVIDIA GPUs and Intel co-processors)
  • Peak performance:
    • 313 Tflops
  • Access Policy

    LEO ATROX https://www.risc2-project.eu/2022/04/22/leo-atrox/ Fri, 22 Apr 2022

  • Title: LEO ATROX
  • System name: LEO ATROX
  • Location: Center for Data Analysis and Supercomputing (CADS) – University of Guadalajara
  • Web
  • OS: N/D
  • Country: Mexico
  • Processor architecture:
    • 150 nodes in total:
      • 140 Xeon Gold nodes.
      • 4 Fat nodes.
      • 2 NVIDIA Tesla P100 nodes.
      • 4 Xeon Phi nodes.
  • Manufacturer: Fujitsu
  • Peak performance: 504 TFlops
  • Access Policy: N/D
    Kabré Supercomputer https://www.risc2-project.eu/2022/04/22/kabre-supercomputer/ Fri, 22 Apr 2022

  • Title: Kabré Supercomputer
  • System name: Kabré Supercomputer
  • Location: National High Technology Center (CeNAT)
  • Web
  • OS: Linux CentOS 7.2
  • Country: Costa Rica
  • Processor architecture:
    • Simulation nodes (32 nodes): Intel Xeon Phi KNL, 64 physical cores, 96 GB of main memory
    • Data science nodes (5 nodes): Intel Xeon, 24 physical cores, 16-128 GB main memory
    • Machine learning nodes (8 nodes): Intel Xeon, 16 physical cores, 16-32 GB main memory, half the nodes with NVIDIA K40 GPU, half with NVIDIA V100 GPU
    • Bioinformatics nodes (7 nodes): Intel Xeon, 24 physical cores, 512-1024 GB main memory
    • Storage capacity: 120 TB
  • Access Policy: Restricted to students and staff of all public universities in Costa Rica
  • Main research domains: Full computational science spectrum, big data, artificial intelligence, bioinformatics
    Laboratorio de Computación Avanzada para Investigación https://www.risc2-project.eu/2022/04/22/laboratorio-de-computacion-avanzada-para-investigacion/ Fri, 22 Apr 2022

  • Title: Laboratorio de Computación Avanzada para Investigación
  • System name: Laboratorio de Computación Avanzada para Investigación
  • Web
  • Location: Universidad del Rosario (Bogotá)
  • Country: Colombia
  • Processor architecture:
    • 8 Lenovo nodes – Class 1: 32 CPU cores and 4 NVIDIA P100 per node; Class 2: 32 CPU cores and 4 NVIDIA Ampere A30 per node
    • 180 TB shared storage

  • Vendor: DELL
  • Peak performance: 40 TFlops
  • Access policy