storage - RISC2 Project
https://www.risc2-project.eu

International Conference for High Performance Computing, Networking, Storage, and Analysis
https://www.risc2-project.eu/events/the-international-conference-for-high-performance-computing-networking-storage-and-analysis/
Tue, 31 Oct 2023 15:23:50 +0000

Hypatia
https://www.risc2-project.eu/2023/06/11/hypatia/
Sun, 11 Jun 2023 09:05:30 +0000

  • Title: Hypatia
  • System name: Hypatia
  • Location: Universidad de los Andes Colombia – Data Center (Bogotá)
  • Web
  • OS: Linux CentOS 7
  • Country: Colombia
  • Processor architecture:
    • Master Node: 1 PowerEdge R640 Server: 2 x Intel® Xeon® Silver 4210R 2.4G, 10C/20T, 9.6GT/s, 13.75M Cache, Turbo, HT (100W) DDR4-2400. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
    • Compute Node:  
      • 10 PowerEdge R640 Server: 2 x Intel® Xeon® Gold 6242R 3.1G, 20C/40T, 10.4GT/s, 27.5M Cache, Turbo, HT (205W) DDR4-2933. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
      • 3 PowerEdge R6525 Server 256 GB: 2 x AMD EPYC 7402 2.80GHz, 24C/48T, 128M Cache (180W) DDR4-3200. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
      • 2 PowerEdge R6525 Server 512 GB: 2 x AMD EPYC 7402 2.80GHz, 24C/48T, 128M Cache (180W) DDR4-3200. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
      • 1 PowerEdge R6525 Server 1 TB: 2 x AMD EPYC 7402 2.80GHz, 24C/48T, 128M Cache (180W) DDR4-3200. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
      • 2 PowerEdge R740 Server: 3 x NVIDIA® Quadro® RTX6000 24 GB, 250W, Dual Slot, PCIe x16 Passive Cooled, Full Height GPU. Intel® Xeon® Gold 6226R 2.9GHz, 16C/32T, 10.4GT/s, 22M Cache, Turbo, HT (150W) DDR4-2933. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
    • Storage:
      • 1 Dell EMC ME4084 SAS OST – 84 x 4TB HDD 7.2K 512n SAS 12Gbps 3.5in
      • 1 Dell EMC ME4024 SAS MDT – 24 X 960 GB SSD SAS Read Intensive 12Gbps 512e 2.5in Hot-plug Drive, PM5-R, 1DWPD, 1752 TBW
      • 4 PowerEdge R740 Server: 2 x Intel® Xeon® Gold 6230R 2.1G, 26C/52T, 10.4GT/s, 35.75M Cache, Turbo, HT (150W) DDR4-2933. Mellanox ConnectX-6 Single Port HDR100 QSFP56 Infiniband Adapter
  • Vendor: DELL
  • Peak performance: TBC
  • Access Policy: TBC
  • Main research domains: TBC

    Developing Efficient Scientific Gateways for Bioinformatics in Supercomputer Environments Supported by Artificial Intelligence
    https://www.risc2-project.eu/2023/03/20/developing-efficient-scientific-gateways-for-bioinformatics-in-supercomputer-environments-supported-by-artificial-intelligence/
    Mon, 20 Mar 2023 09:37:46 +0000

    Scientific gateways bring enormous benefits to end users by simplifying access and hiding the complexity of the underlying distributed computing infrastructure, but they require significant development and maintenance efforts. BioinfoPortal [1], through its CSGrid [2] middleware, takes advantage of the heterogeneous resources of Santos Dumont [3]. However, task submission still requires a substantial effort to decide which configuration leads to efficient execution. This project aims to develop green and intelligent scientific gateways for BioinfoPortal supported by high-performance computing (HPC) environments and specialised technologies such as scientific workflows, data mining, machine learning, and deep learning.

    The efficient analysis and interpretation of Big Data opens new challenges in molecular biology, genetics, biomedicine, and healthcare to improve personalised diagnostics and therapeutics, so finding new avenues to deal with this massive amount of information becomes necessary. New Bioinformatics and Computational Biology paradigms drive data storage, management, and access. Advances in HPC and Big Data in this domain represent a vast field of opportunities for bioinformatics researchers, as well as a significant challenge. The BioinfoPortal science gateway is a multiuser Brazilian infrastructure. We present several challenges for efficiently executing applications and discuss our findings on improving the use of computational resources. We performed several large-scale bioinformatics experiments that are considered computationally intensive and time-consuming. We are currently coupling artificial intelligence to generate models that analyse computational and bioinformatics metadata, in order to understand how machine learning can predict the efficient use of computational resources. The computational executions are conducted on Santos Dumont, the largest supercomputer in Latin America, which is dedicated to the research community and provides 5.1 petaflops and 36,472 computational cores distributed over 1,134 computational nodes.
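
    The resource-prediction idea described above can be illustrated with a small, hedged sketch: train a regression model on historical execution metadata and use it to rank candidate configurations by predicted runtime. The features, synthetic data, and model choice below are illustrative assumptions and do not reproduce the actual BioinfoPortal/CSGrid pipeline.

```python
# Hypothetical sketch: predicting job runtime from submission metadata.
# Features, data, and model are illustrative assumptions; they do not
# reproduce the actual BioinfoPortal/CSGrid machine-learning pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic historical metadata: [nodes, cores_per_node, input_size_gb]
X = rng.uniform([1, 8, 1], [32, 48, 500], size=(200, 3))
# Synthetic runtimes (s): work grows with input size, shrinks with total cores.
y = 3600 * X[:, 2] / (X[:, 0] * X[:, 1]) + rng.normal(0, 60, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Rank candidate configurations for a 120 GB input by predicted runtime.
candidates = np.array([[4, 24, 120], [8, 24, 120], [16, 48, 120]])
predicted = model.predict(candidates)
print("Predicted runtimes (s):", predicted.round(1))
print("Suggested (nodes, cores/node):", candidates[int(np.argmin(predicted))][:2])
```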

    By:

    A. Carneiro, B. Fagundes, C. Osthoff, G. Freire, K. Ocaña, L. Cruz, L. Gadelha, M. Coelho, M. Galheigo, and R. Terra are with the National Laboratory of Scientific Computing, Rio de Janeiro, Brazil.

    D. Carvalho is with the Federal Center for Technological Education Celso Suckow da Fonseca, Rio de Janeiro, Brazil.

    Douglas Cardoso is with the Polytechnic Institute of Tomar, Portugal.

    F. Boito and L. Teylo are with the University of Bordeaux, CNRS, Bordeaux INP, INRIA, LaBRI, Talence, France.

    P. Navaux is with the Informatics Institute, Federal University of Rio Grande do Sul, Rio Grande do Sul, Brazil.

    References:

    Ocaña, K. A. C. S.; Galheigo, M.; Osthoff, C.; Gadelha, L. M. R.; Porto, F.; Gomes, A. T. A.; Oliveira, D.; Vasconcelos, A. T. BioinfoPortal: A scientific gateway for integrating bioinformatics applications on the Brazilian national high-performance computing network. Future Generation Computer Systems, v. 107, p. 192-214, 2020.

    Mondelli, M. L.; Magalhães, T.; Loss, G.; Wilde, M.; Foster, I.; Mattoso, M. L. Q.; Katz, D. S.; Barbosa, H. J. C.; Vasconcelos, A. T. R.; Ocaña, K. A. C. S; Gadelha, L. BioWorkbench: A High-Performance Framework for Managing and Analyzing Bioinformatics Experiments. PeerJ, v. 1, p. 1, 2018.

    Coelho, M.; Freire, G.; Ocaña, K.; Osthoff, C.; Galheigo, M.; Carneiro, A. R.; Boito, F.; Navaux, P.; Cardoso, D. O. Desenvolvimento de um Framework de Aprendizado de Máquina no Apoio a Gateways Científicos Verdes, Inteligentes e Eficientes: BioinfoPortal como Caso de Estudo Brasileiro. In: XXIII Simpósio em Sistemas Computacionais de Alto Desempenho – WSCAD 2022 (https://wscad.ufsc.br/), 2022.

    Terra, R.; Ocaña, K.; Osthoff, C.; Cruz, L.; Boito, F.; Navaux, P.; Carvalho, D. Framework para a Construção de Redes Filogenéticas em Ambiente de Computação de Alto Desempenho. In: XXIII Simpósio em Sistemas Computacionais de Alto Desempenho – WSCAD 2022 (https://wscad.ufsc.br/), 2022.

    Ocaña, K.; Cruz, L.; Coelho, M.; Terra, R.; Galheigo, M.; Carneiro, A.; Carvalho, D.; Gadelha, L.; Boito, F.; Navaux, P.; Osthoff, C. ParslRNA-Seq: an efficient and scalable RNAseq analysis workflow for studies of differentiated gene expression. In: Latin America High-Performance Computing Conference (CARLA), 2022, Rio Grande do Sul, Brazil. Proceedings of the Latin American High-Performance Computing Conference – CARLA 2022 (http://www.carla22.org/), 2022.

    [1] https://bioinfo.lncc.br/

    [2] https://git.tecgraf.puc-rio.br/csbase-dev/csgrid/-/tree/CSGRID-2.3-LNCC

    [3] https://sdumont.lncc.br

    JUPITER Ascending – First European Exascale Supercomputer Coming to Jülich
    https://www.risc2-project.eu/2023/01/02/jupiter-ascending-first-european-exascale-supercomputer-coming-to-julich/
    Mon, 02 Jan 2023 12:14:22 +0000

    It was finally decided in 2022: Forschungszentrum Jülich will be home to Europe’s first exascale computer. The supercomputer is set to be the first in Europe to surpass the threshold of one quintillion (a “1” followed by 18 zeros) calculations per second. The system will be acquired by the European supercomputing initiative EuroHPC JU. The exascale computer should help to solve important and urgent scientific questions regarding, for example, climate change, how to combat pandemics, and sustainable energy production, while also enabling the intensive use of artificial intelligence and the analysis of large data volumes. The overall costs for the system amount to 500 million euros. Of this total, 250 million euros is being provided by EuroHPC JU and a further 250 million euros in equal parts by the German Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW).

    The computer named JUPITER (short for “Joint Undertaking Pioneer for Innovative and Transformative Exascale Research”) will be installed in 2023/2024 on the campus of Forschungszentrum Jülich. It is intended that the system will be operated by the Jülich Supercomputing Centre (JSC), whose supercomputers JUWELS and JURECA currently rank among the most powerful in the world. JSC participated in the application procedure for a high-end supercomputer as a member of the Gauss Centre for Supercomputing (GCS), an association of the three German national supercomputing centres: JSC in Jülich, the High-Performance Computing Center Stuttgart (HLRS), and the Leibniz Supercomputing Centre (LRZ) in Garching. The competition was organized by the European supercomputing initiative EuroHPC JU, which was formed by the European Union together with European countries and private companies.

    JUPITER is now set to become the first European supercomputer to make the leap into the exascale class. In terms of computing power, it will be more powerful than 5 million modern laptops or PCs. Just like Jülich’s current supercomputer JUWELS, JUPITER will be based on a dynamic, modular supercomputing architecture, which Forschungszentrum Jülich developed together with European and international partners in the EU’s DEEP research projects.
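
    As a rough back-of-the-envelope check of that comparison (an illustrative calculation, not an official JSC figure), one exaflop divided by 5 million machines implies about 200 gigaflops per laptop, which is a plausible peak for a current consumer device:

```python
# Back-of-the-envelope check of the "5 million laptops" comparison.
# The per-laptop result is an implied figure, not an official JSC number.
exaflop = 1e18           # JUPITER target: ~10^18 floating-point operations/s
laptops = 5_000_000      # comparison used in the announcement
per_laptop_gflops = exaflop / laptops / 1e9
print(f"Implied per-laptop performance: {per_laptop_gflops:.0f} GFLOPS")
```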

    In a modular supercomputer, various computing modules are coupled together. This enables program parts of complex simulations to be distributed over several modules, ensuring that the various hardware properties can be optimally utilized in each case. Its modular construction also means that the system is well prepared for integrating future technologies such as quantum computing or neuromorphic modules, which emulate the neural structure of a biological brain.
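
    The routing idea behind this modular approach can be sketched in a few lines. The example below is a deliberately simplified toy: the module names, workflow stages, and placement rule are assumptions for illustration only; real modular systems such as JUWELS delegate this to the batch system and communication libraries rather than application code.

```python
# Toy illustration of modular supercomputing: route each workflow stage to the
# module whose hardware matches it best. Module names, stages, and the rule
# are hypothetical; real systems delegate this to the scheduler and MPI.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    needs_gpu: bool
    io_heavy: bool

MODULES = {"cluster": "CPU nodes", "booster": "GPU nodes", "storage": "I/O nodes"}

def place(stage: Stage) -> str:
    """Pick the module whose hardware matches the stage's dominant demand."""
    if stage.needs_gpu:
        return "booster"
    if stage.io_heavy:
        return "storage"
    return "cluster"

workflow = [
    Stage("mesh generation", needs_gpu=False, io_heavy=False),
    Stage("ML surrogate training", needs_gpu=True, io_heavy=False),
    Stage("checkpoint analysis", needs_gpu=False, io_heavy=True),
]

for s in workflow:
    module = place(s)
    print(f"{s.name:>22} -> {module} ({MODULES[module]})")
```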

    Figure – Modular Supercomputing Architecture: computing and storage modules of the exascale computer in its basic configuration (blue), as well as optional modules (green) and modules for future technologies (purple) as possible extensions.

    In its basic configuration, JUPITER will have an enormously powerful booster module with highly efficient GPU-based computation accelerators. Massively parallel applications are accelerated by this booster in a similar way to a turbocharger, for example to calculate high-resolution climate models, develop new materials, simulate complex cell processes and energy systems, advance basic research, or train next-generation, computationally intensive machine-learning algorithms.

    One major challenge is the energy required for such large computing power. The average power is anticipated to be up to 15 megawatts. JUPITER has been designed as a “green” supercomputer and will be powered by green electricity. The envisaged warm-water cooling system should help to ensure that JUPITER achieves the highest efficiency values. At the same time, the cooling technology opens up the possibility of intelligently using the waste heat that is produced. For example, just like its predecessor system JUWELS, JUPITER will be connected to the new low-temperature network on the Forschungszentrum Jülich campus. Further potential applications for the waste heat from JUPITER are currently being investigated by Forschungszentrum Jülich.

    By Jülich Supercomputing Centre (JSC)

     

    Image: Germany’s fastest supercomputer JUWELS at Forschungszentrum Jülich, which is funded in equal parts by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW) via the Gauss Centre for Supercomputing (GCS). (Copyright: Forschungszentrum Jülich / Sascha Kreklau)

    RISC2 attended the Supercomputing Conference 2022
    https://www.risc2-project.eu/2022/11/22/risc2-attended-the-supercomputing-conference-2022/
    Tue, 22 Nov 2022 12:12:24 +0000

    The RISC2 team participated in the Supercomputing Conference 2022, in Dallas, Texas. The International Conference for High Performance Computing, Networking, Storage, and Analysis, which took place between November 13 and 18, was a great opportunity for networking and for fostering collaboration.

    Our partners Carlos Barrios Hernandez (from the Industrial University of Santander) and Esteban Meneses (from the Costa Rica National High Technology Center) participated directly in the “Americas HPC Collaboration” session, on November 16, which aimed to showcase opportunities and experiences between different HPC networks. In this session, Philippe Navaux (from UFRGS and SCALAC) presented the RISC2 project.

    It was also during the conference that RISC2 was honoured with the HPCwire Editors’ Choice Award for “Best HPC Collaboration (Academia/Government/Industry)” 2022.

    The partners representing RISC2 were Fabrizio Gagliardi (from the Barcelona Supercomputing Center), Rui Oliveira (from INESC TEC), Bernd Mohr (from the Jülich Supercomputing Centre), Carlos Barrios Hernandez (from the Industrial University of Santander), Esteban Meneses (from the Costa Rica National High Technology Center), Pedro Alberto (from the University of Coimbra), and Philippe Navaux (from UFRGS and SCALAC).

    RISC2 receives honors in 2022 HPCwire Readers’ and Editors’ Choice Awards
    https://www.risc2-project.eu/2022/11/17/risc2-receives-honors-in-2022-hpcwire-readers-and-editors-choice-awards/
    Thu, 17 Nov 2022 11:17:39 +0000

    The RISC2 project has been recognised in the annual HPCwire Readers’ and Editors’ Choice Awards, presented at the 2022 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22), in Dallas, Texas. RISC2 received the Editors’ Choice Award for Best HPC Collaboration (Academia/Government/Industry).

    Fabrizio Gagliardi, director of the RISC2 project, says: “I feel particularly honored by this recognition on behalf of all the project members who have worked so hard to achieve in such a short time, and with limited resources, a considerable impact in promoting HPC activities in Latin America in collaboration with Europe”.

    Editors’ Choice: Best HPC Collaboration (Academia/Government/Industry)

    The RISC2 project, a follow-up to the earlier RISC project, aims to promote and improve the relationship between research and industrial communities, focusing on HPC applications and infrastructure deployment, between Europe and Latin America. Led by the Barcelona Supercomputing Center (BSC), RISC2 brings together 16 partners from 12 different countries.

    About the HPCwire Readers’ and Editors’ Choice Awards

    The list of winners was revealed at the SC22 HPCwire booth and on the HPCwire website.

    The coveted annual HPCwire Readers’ and Editors’ Choice Awards are determined through a nomination and voting process with the global HPCwire community, as well as selections from the HPCwire editors. The awards are an annual feature of the publication and constitute prestigious recognition from the HPC community. They are revealed each year to kick off the annual supercomputing conference, which showcases high performance computing, networking, storage, and data analysis.

    “The 2022 Readers’ and Editors’ Choice Awards are exceptional, indeed. Solutions developed with HPC led the world out of the Pandemic, and we officially broke the Exascale threshold – HPC has now reached a billion, billion operations per second!” said Tom Tabor, CEO of Tabor Communications, publishers of HPCwire. “Between our worldwide readership of HPC experts and the most renowned panel of editors in the industry, the Readers’ and Editors’ Choice Awards represent resounding recognition of HPC accomplishments throughout the world. Our sincerest gratitude and hearty congratulations go out to all of the winners.”

    SC’22
    https://www.risc2-project.eu/events/sc22/
    Wed, 09 Nov 2022 11:15:40 +0000

    HPC meets AI and Big Data
    https://www.risc2-project.eu/2022/10/06/hpc-meets-ai-and-big-data/
    Thu, 06 Oct 2022 08:23:34 +0000

    HPC services are no longer solely targeted at highly parallel modelling and simulation tasks. Indeed, the computational power offered by these services is now being used to support data-centric Big Data and Artificial Intelligence (AI) applications. By combining both types of computational paradigms, HPC infrastructures will be key for improving the lives of citizens, speeding up scientific breakthroughs in different fields (e.g., health, IoT, biology, chemistry, physics), and increasing the competitiveness of companies [OG+15, NCR+18].

    As the utility and usage of HPC infrastructures increases, more computational and storage power is required to efficiently handle the growing number of targeted applications. In fact, many HPC centers are now aiming at exascale supercomputers capable of at least one exaFLOP (10^18 operations per second), which represents a thousandfold increase in processing power over the first petascale computer deployed in 2008 [RD+15]. Although this is a necessary requirement for handling the increasing number of HPC applications, there are several outstanding challenges that still need to be tackled so that this extra computational power can be fully leveraged.

    Management of large infrastructures and heterogeneous workloads: By adding more compute and storage nodes, one is also increasing the complexity of the overall HPC distributed infrastructure and making it harder to monitor and manage. This complexity is increased by the need to support highly heterogeneous applications that translate into different workloads with specific data storage and processing needs [ECS+17]. For example, on the one hand, traditional scientific modeling and simulation tasks require large slices of computational time, are CPU-bound, and rely on iterative approaches (parametric/stochastic modeling). On the other hand, data-driven Big Data applications comprise shorter computational tasks that are I/O-bound and, in some cases, have real-time response requirements (i.e., they are latency-oriented). Also, many of these applications leverage AI and machine learning tools that require specific hardware (e.g., GPUs) in order to be efficient.

    Support for general-purpose analytics: The increased heterogeneity also demands that HPC infrastructures are able to support general-purpose AI and Big Data applications that were not designed explicitly to run on specialised HPC hardware [KWG+13]. Developers should therefore not be required to significantly change their applications for them to execute efficiently on HPC clusters.

    Avoiding the storage bottleneck: Only increasing the computational power and improving the management of HPC infrastructures may still not be enough to fully harness the capabilities of these infrastructures. In fact, Big Data and AI applications are data-driven and require efficient data storage and retrieval from HPC clusters. With an increasing number of applications and heterogeneous workloads, the storage systems supporting HPC may easily become a bottleneck [YDI+16, ECS+17]. Indeed, as pointed out by several studies, storage access time is one of the major bottlenecks limiting the efficiency of current and next-generation HPC infrastructures.

    In order to address these challenges, RISC2 partners are exploring:

    • New monitoring and debugging tools that can aid in the analysis of complex AI and Big Data workloads in order to pinpoint potential performance and efficiency bottlenecks, while helping system administrators and developers troubleshoot them [ENO+21].

    • Emerging virtualization technologies, such as containers, that enable users to efficiently deploy and execute traditional AI and Big Data applications in an HPC environment, without requiring any changes to their source code [FMP21].

    • The Software-Defined Storage paradigm, in order to improve the Quality-of-Service (QoS) of HPC storage services when supporting hundreds to thousands of data-intensive AI and Big Data applications [DLC+22, MTH+22]; a simplified sketch of this idea follows the list.
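
    To make the last point concrete, the sketch below shows one very simplified QoS mechanism that a software-defined storage layer could apply: a per-tenant token bucket that throttles I/O request rates so that a data-intensive job cannot starve latency-sensitive ones. This is a toy illustration, not the PAIO [MTH+22] implementation; the tenant classes and limits are assumptions.

```python
# Toy per-tenant token-bucket rate limiter, illustrating how a software-defined
# storage layer could enforce I/O QoS across applications. A sketch only; the
# tenant classes and limits are arbitrary assumptions, not PAIO/RISC2 values.
import time

class TokenBucket:
    def __init__(self, rate_ops_per_s: float, burst: float):
        self.rate = rate_ops_per_s   # sustained I/O ops per second allowed
        self.capacity = burst        # maximum burst size, in ops
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, ops: int = 1) -> bool:
        """Return True if `ops` I/O requests may proceed now."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= ops:
            self.tokens -= ops
            return True
        return False

# Hypothetical tenant classes sharing one parallel file system: the
# latency-sensitive AI job gets a higher sustained rate than bulk analytics.
limiters = {
    "ai-training": TokenBucket(rate_ops_per_s=1000, burst=200),
    "bulk-analytics": TokenBucket(rate_ops_per_s=200, burst=50),
}

def submit_io(tenant: str) -> str:
    return "dispatch" if limiters[tenant].allow() else "queue/defer"

print(submit_io("ai-training"))
print(submit_io("bulk-analytics"))
```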

    To sum up, these three research goals, and respective contributions, will enable the next generation of HPC infrastructures and services that can efficiently meet the demands of Big Data and AI workloads. 

     

    References

    [DLC+22] Dantas, M., Leitão, D., Cui, P., Macedo, R., Liu, X., Xu, W., Paulo, J., 2022. Accelerating Deep Learning Training Through Transparent Storage Tiering. IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGrid)  

    [ECS+17] Joseph, E., Conway, S., Sorensen, B., Thorp, M., 2017. Trends in the Worldwide HPC Market (Hyperion Presentation). HPC User Forum at HLRS.  

    [FMP21] Faria, A., Macedo, R., Paulo, J., 2021. Pods-as-Volumes: Effortlessly Integrating Storage Systems and Middleware into Kubernetes. Workshop on Container Technologies and Container Clouds (WoC’21). 

    [KWG+13] Katal, A., Wazid, M. and Goudar, R.H., 2013. Big data: issues, challenges, tools and good practices. International conference on contemporary computing (IC3). 

    [NCR+18] Netto, M.A., Calheiros, R.N., Rodrigues, E.R., Cunha, R.L. and Buyya, R., 2018. HPC cloud for scientific and business applications: Taxonomy, vision, and research challenges. ACM Computing Surveys (CSUR). 

    [MTH+22] Macedo, R., Tanimura, Y., Haga, J., Chidambaram, V., Pereira, J., Paulo, J., 2022. PAIO: General, Portable I/O Optimizations With Minor Application Modifications. USENIX Conference on File and Storage Technologies (FAST). 

    [OG+15] Osseyran, A. and Giles, M. eds., 2015. Industrial applications of high-performance computing: best global practices. 

    [RD+15] Reed, D.A. and Dongarra, J., 2015. Exascale computing and big data. Communications of the ACM. 

    [ENO+21] Esteves, T., Neves, F., Oliveira, R., Paulo, J., 2021. CaT: Content-aware Tracing and Analysis for Distributed Systems. ACM/IFIP Middleware conference (Middleware). 

    [YDI+16] Yildiz, O., Dorier, M., Ibrahim, S., Ross, R. and Antoniu, G., 2016, May. On the root causes of cross-application I/O interference in HPC storage systems. IEEE International Parallel and Distributed Processing Symposium (IPDPS). 

     

    By INESC TEC

    THUBAT KAAL II
    https://www.risc2-project.eu/2022/04/22/thubat-kaal-ii/
    Fri, 22 Apr 2022 09:39:18 +0000

  • Title: THUBAT KAAL II
  • System name: THUBAT KAAL II
  • Location: Centro Nacional de Supercómputo (CNS)
  • Web
  • OS: RedHat 7.3
  • Country: Mexico
  • Processor architecture:
    • 86 nodes: 82 nodes with Intel Xeon x86-64 Skylake processors & 4 nodes with 4 NVIDIA P100 cards each
  • Workload manager: SLURM
  • Storage system: Lustre High availability 1.7 PB
  • Network: Infiniband EDR 100GBPS – All to All topology
  • Manufacturer: ATOS Bull
  • Peak performance: 257 Tflops
  • Access Policy

    National University (UNA)
    https://www.risc2-project.eu/2022/04/22/national-university-una/
    Fri, 22 Apr 2022 09:21:15 +0000

  • Title: National University (UNA)
  • System name: National University (UNA)
  • Location: National University (UNA) – School of Physics
  • Web
  • OS: Linux CentOS 6.7
  • Country: Costa Rica
  • Processor architecture:
    • Head nodes (1 node): Intel Xeon, 20 physical cores, 64 GB main memory.
    • Computing nodes (11 nodes): Intel Xeon, 20 physical cores, 64 GB main memory.
    • Storage capacity: 12 TB
  • Access Policy: Restricted to students and staff of the National University
  • Main research domains: Computational physics, bioinformatics