International Conference for High Performance Computing, Networking, Storage, and Analysis https://www.risc2-project.eu/events/the-international-conference-for-high-performance-computing-networking-storage-and-analysis/ Tue, 31 Oct 2023 15:23:50 +0000

Scientific Machine Learning and HPC https://www.risc2-project.eu/2023/06/28/scientific-machine-learning-and-hpc/ Wed, 28 Jun 2023 08:24:28 +0000

In recent years we have seen rapid growth of interest in artificial intelligence in general, and in machine learning (ML) techniques in particular, across different branches of science and engineering. The rapid growth of the Scientific Machine Learning field derives from the combined development and use of efficient data analysis algorithms, the availability of data from scientific instruments and computer simulations, and advances in high-performance computing. On May 25, 2023, COPPE/UFRJ organized a forum to discuss developments in Artificial Intelligence and their impact on society [*].

As coordinator of the High Performance Computing Center (Nacad) at COPPE/UFRJ, Alvaro Coutinho presented advances in AI in engineering and the importance of multidisciplinary research networks to address current issues in Scientific Machine Learning. Alvaro took the opportunity to highlight the need for Brazil to invest in high-performance computing capacity.

The country’s sovereignty requires autonomy in producing ML advances, which in turn depends on HPC support at universities and research centres. Brazil has nine machines in the Top500 list of the most powerful computer systems in the world, but almost all of them belong to the Petrobras company, and universities need much more. ML is well known to require HPC, and when combined with scientific computer simulations it becomes essential.

The conventional notion of ML involves training an algorithm to automatically discover patterns, signals, or structures that may be hidden in huge databases and whose exact nature is unknown and therefore cannot be explicitly programmed. The field faces two major drawbacks: the need for a significant volume of (labelled) data, which is expensive to acquire, and a limited capacity to extrapolate, which makes predictions beyond the scenarios contained in the training data difficult.
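To make the extrapolation limitation concrete, here is a small illustrative sketch (not from the article; it assumes NumPy and scikit-learn are available): a regression model fitted only on inputs between 0 and 1 keeps predicting values close to what it saw during training when queried well outside that range.

```python
# Illustrative only: a model trained on x in [0, 1] fails to extrapolate to x = 2.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=(200, 1))   # training inputs cover [0, 1]
y_train = 3.0 * x_train.ravel()                  # simple linear relationship y = 3x

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(x_train, y_train)

x_test = np.array([[0.5], [2.0]])                # 0.5 interpolates, 2.0 extrapolates
print(model.predict(x_test))                     # roughly [1.5, 3.0]; the true value at x = 2 is 6.0
```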

Considering that an algorithm’s predictive ability is a learned skill, these challenges must be addressed to improve the analytical and predictive capacity of Scientific ML algorithms and to maximize their impact in applications such as renewable energy. References [1-5] illustrate recent advances in Scientific Machine Learning in different areas of engineering and computer science.

References:

[*] https://www.coppe.ufrj.br/pt-br/planeta-coppe-noticias/noticias/coppe-e-sociedade-especialistas-debatem-os-reflexos-da-inteligencia

[1] Baker, Nathan, Frank Alexander, Timo Bremer, Aric Hagberg, Yannis Kevrekidis, Habib Najm, Manish Parashar, Abani Patra, James Sethian, Stefan Wild, Karen Willcox, and Steven Lee. Workshop Report on Basic Research Needs for Scientific Machine Learning: Core Technologies for Artificial Intelligence. United States: N. p., 2019. Web. doi:10.2172/1478744.

[2] Brunton, Steven L., Bernd R. Noack, and Petros Koumoutsakos. “Machine learning for fluid mechanics.” Annual Review of Fluid Mechanics 52 (2020): 477-508.

[3] Karniadakis, George Em, et al. “Physics-informed machine learning.” Nature Reviews Physics 3.6 (2021): 422-440.

[4] Inria White Book on Artificial Intelligence: Current challenges and Inria’s engagement, 2nd edition, 2021. URL: https://www.inria.fr/en/white-paper-inria-artificial-intelligence

[5] Silva, Romulo, Umair bin Waheed, Alvaro Coutinho, and George Em Karniadakis. “Improving PINN-based Seismic Tomography by Respecting Physical Causality.” In AGU Fall Meeting Abstracts, vol. 2022, pp. S11C-09. 2022.

Laboratorio de Supercómputo https://www.risc2-project.eu/2023/06/12/laboratorio-de-supercomputo/ Mon, 12 Jun 2023 14:07:30 +0000

  • System name: Laboratorio de Supercómputo
  • Location: Facultad de Ciencias, Universidad Autónoma del Estado de México
  • Areas: Analysis, modelling and simulation of complex systems
  • Web
Latin American researchers present greener gateways for Big Data in INRIA Brazil Workshop https://www.risc2-project.eu/2023/05/03/latin-american-researchers-present-greener-gateways-for-big-data-in-inria-brazil-workshop/ Wed, 03 May 2023 13:29:03 +0000

In the scope of the RISC2 Project, the State University of São Paulo and INRIA (Institut National de Recherche en Informatique et en Automatique), a renowned French research institute, held a workshop that set the stage for the presentation of the results accomplished under the work “Developing Efficient Scientific Gateways for Bioinformatics in Supercomputer Environments Supported by Artificial Intelligence”.

The goal of the investigation is to provide users with simplified access to computing infrastructures through scientific solutions that represent significant developments in their fields. In the case of this project, the aim is to develop intelligent, green scientific solutions for BioinfoPortal (a multiuser Brazilian infrastructure) supported by High-Performance Computing environments.

Technologically, it draws on areas such as scientific workflows, data mining, machine learning, and deep learning. If successful, the analysis and interpretation of Big Data will open new paths in molecular biology, genetics, biomedicine, and health; this makes tools capable of efficiently digesting the incoming volume of information a necessity.

The team performed several large-scale bioinformatics experiments that are considered computationally intensive. Currently, artificial intelligence is being used to build models that analyse computational and bioinformatics metadata in order to understand how machine learning can predict the efficient use of computational resources. The workshop was held on April 10th and 11th at the University of São Paulo.

The RISC2 Project, which aims to explore the impact of HPC on the economies of Latin America and Europe, relies on the interaction between researchers and policymakers in both regions. It brings together 16 academic partners, including the University of Buenos Aires, the National Laboratory for High Performance Computing of Chile, the Jülich Supercomputing Centre, and the Barcelona Supercomputing Center (the leader of the consortium), among others.

Developing Efficient Scientific Gateways for Bioinformatics in Supercomputer Environments Supported by Artificial Intelligence https://www.risc2-project.eu/2023/03/20/developing-efficient-scientific-gateways-for-bioinformatics-in-supercomputer-environments-supported-by-artificial-intelligence/ Mon, 20 Mar 2023 09:37:46 +0000

Scientific gateways bring enormous benefits to end users by simplifying access to, and hiding the complexity of, the underlying distributed computing infrastructure. Gateways, however, require significant development and maintenance efforts. BioinfoPortal [1], through its CSGrid [2] middleware, takes advantage of the heterogeneous resources of Santos Dumont [3]. However, task submission still requires a substantial effort to decide on the configuration that leads to efficient execution. This project aims to develop green and intelligent scientific gateways for BioinfoPortal supported by high-performance computing (HPC) environments and specialised technologies such as scientific workflows, data mining, machine learning, and deep learning.

The efficient analysis and interpretation of Big Data opens new challenges in molecular biology, genetics, biomedicine, and healthcare, for example to improve personalised diagnostics and therapeutics; finding new avenues to deal with this massive amount of information becomes necessary. New Bioinformatics and Computational Biology paradigms drive storage, management, and data access, and advances in HPC and Big Data in this domain represent a vast field of opportunities for bioinformatics researchers as well as a significant challenge.

The BioinfoPortal science gateway is a multiuser Brazilian infrastructure. We present several challenges to efficiently executing applications and discuss our findings on improving the use of computational resources. We performed several large-scale bioinformatics experiments that are considered computationally intensive and time-consuming. We are currently coupling artificial intelligence to generate models that analyse computational and bioinformatics metadata in order to understand how machine learning can predict the efficient use of computational resources. The computational executions are conducted at Santos Dumont, the largest supercomputer in Latin America, dedicated to the research community, with 5.1 petaflops and 36,472 computational cores distributed over 1,134 computational nodes.
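As an illustration of the kind of model alluded to above, the sketch below (a hypothetical example, not the project’s actual code; the data, column names, and choice of regressor are assumptions) learns execution time from past job metadata and uses the prediction to rank candidate core counts by estimated core-hours.

```python
# Hypothetical sketch: predicting runtime from execution metadata and using it
# to rank candidate resource configurations. Data and column names are made up.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

history = pd.DataFrame({
    "input_mb":  [120, 120, 480, 480, 960, 960],    # size of the input dataset
    "cores":     [24, 48, 24, 48, 48, 96],          # cores used in past runs
    "runtime_s": [310, 190, 1150, 640, 1210, 700],  # measured wall-clock time
})

model = GradientBoostingRegressor(random_state=0)
model.fit(history[["input_mb", "cores"]], history["runtime_s"])

# For a new input, score a few candidate core counts and prefer the one with the
# lowest predicted core-hours (a crude proxy for efficient, "green" execution).
candidates = pd.DataFrame({"input_mb": [480, 480, 480], "cores": [24, 48, 96]})
predicted_runtime = model.predict(candidates)
core_hours = predicted_runtime * candidates["cores"] / 3600.0
print(candidates.assign(predicted_runtime_s=predicted_runtime, core_hours=core_hours))
```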

By:

A. Carneiro, B. Fagundes, C. Osthoff, G. Freire, K. Ocaña, L. Cruz, L. Gadelha, M. Coelho, M. Galheigo, and R. Terra are with the National Laboratory of Scientific Computing, Rio de Janeiro, Brazil.

D. Carvalho is with the Federal Center for Technological Education Celso Suckow da Fonseca, Rio de Janeiro, Brazil.

D. Cardoso is with the Polytechnic Institute of Tomar, Portugal.

F. Boito and L. Teylo are with the University of Bordeaux, CNRS, Bordeaux INP, INRIA, LaBRI, Talence, France.

P. Navaux is with the Informatics Institute, Federal University of Rio Grande do Sul, Rio Grande do Sul, Brazil.

    References:

    Ocaña, K. A. C. S.; Galheigo, M.; Osthoff, C.; Gadelha, L. M. R.; Porto, F.; Gomes, A. T. A.; Oliveira, D.; Vasconcelos, A. T. BioinfoPortal: A scientific gateway for integrating bioinformatics applications on the Brazilian national high-performance computing network. Future Generation Computer Systems, v. 107, p. 192-214, 2020.

    Mondelli, M. L.; Magalhães, T.; Loss, G.; Wilde, M.; Foster, I.; Mattoso, M. L. Q.; Katz, D. S.; Barbosa, H. J. C.; Vasconcelos, A. T. R.; Ocaña, K. A. C. S; Gadelha, L. BioWorkbench: A High-Performance Framework for Managing and Analyzing Bioinformatics Experiments. PeerJ, v. 1, p. 1, 2018.

    Coelho, M.; Freire, G.; Ocaña, K.; Osthoff, C.; Galheigo, M.; Carneiro, A. R.; Boito, F.; Navaux, P.; Cardoso, D. O. Desenvolvimento de um Framework de Aprendizado de Máquina no Apoio a Gateways Científicos Verdes, Inteligentes e Eficientes: BioinfoPortal como Caso de Estudo Brasileiro In: XXIII Simpósio em Sistemas Computacionais de Alto Desempenho – WSCAD 2022 (https://wscad.ufsc.br/), 2022.

    Terra, R.; Ocaña, K.; Osthoff, C.; Cruz, L.; Boito, F.; Navaux, P.; Carvalho, D. Framework para a Construção de Redes Filogenéticas em Ambiente de Computação de Alto Desempenho. In: XXIII Simpósio em Sistemas Computacionais de Alto Desempenho – WSCAD 2022 (https://wscad.ufsc.br/), 2022.

    Ocaña, K.; Cruz, L.; Coelho, M.; Terra, R.; Galheigo, M.; Carneiro, A.; Carvalho, D.; Gadelha, L.; Boito, F.; Navaux, P.; Osthoff, C. ParslRNA-Seq: an efficient and scalable RNAseq analysis workflow for studies of differentiated gene expression. In: Latin America High-Performance Computing Conference (CARLA), 2022, Rio Grande do Sul, Brazil. Proceedings of the Latin American High-Performance Computing Conference – CARLA 2022 (http://www.carla22.org/), 2022.

    [1] https://bioinfo.lncc.br/

    [2] https://git.tecgraf.puc-rio.br/csbase-dev/csgrid/-/tree/CSGRID-2.3-LNCC

[3] https://sdumont.lncc.br

Costa Rica HPC School 2023 aimed at teaching the fundamental tools and methodologies in parallel programming https://www.risc2-project.eu/2023/02/14/costa-rica-hpc-school-2023-aimed-at-teaching-the-fundamental-tools-and-methodologies-in-parallel-programming/ Tue, 14 Feb 2023 10:05:55 +0000

The Costa Rica HPC School 2023, organized by CeNAT in collaboration with the RISC2 project, took place between January 30 and February 3 at the Costa Rica National High Technology Center. The main goal of the School was to offer a platform for learning the fundamental tools and methodologies of parallel programming. Holding it in person also fostered networking and team building. The School gathered 32 attendees, mostly students, but also professors and researchers.

Building on the success of previous editions, the seventh installment of the Costa Rica High Performance Computing School (CRHPCS) aimed at preparing students and researchers to introduce HPC tools into their workflows. A selected team of international experts taught sessions on shared-memory programming, distributed-memory programming, accelerator programming, and high-performance computing. This edition had as instructors Alessandro Marani and Nitin Shukla from CINECA, who greatly helped in bringing a vibrant environment to the sessions.

Bernd Mohr, from the Jülich Supercomputing Centre, was the keynote speaker of this year’s edition of the event. A well-known figure in the HPC community at large, Bernd presented the talk “Parallel Performance Analysis at Scale: From Single Node to one Million HPC Cores”. In an amazing voyage through different architecture setups, Bernd highlighted the importance and challenges of performance analysis.

For Esteban Meneses, Costa Rica HPC School General Chair, the School is a key element in building a stronger and more connected HPC community in the region: “This year, thanks to the RISC2 project, we were able to gather participants from Guatemala, El Salvador, and Colombia. Creating these ties is fundamental for later developing more complex initiatives. We aim at preparing future scientists that will develop groundbreaking computer applications that tackle the most pressing problems of our region.”

    More information here. 

Mapping human brain functions using HPC https://www.risc2-project.eu/2023/02/01/mapping-human-brain-functions-using-hpc/ Wed, 01 Feb 2023 13:17:19 +0000

ContentMAP is the first Portuguese project in the field of Psychology and Cognitive Neuroscience to be awarded a European Research Council grant (ERC Starting Grant #802553). The project is mapping how the human brain represents object knowledge – for example, how one represents in the brain everything one knows about a knife: that it cuts, that it has a handle, that it is made out of metal and plastic or metal and wood, that it has a serrated and sharp part, that it is smooth and cold, and so on. To do this, the project collects numerous MRI images while participants see and interact with objects (fMRI). High Performance Computing (HPC) is of central importance for processing these images: it has made it possible to manipulate these data and to run machine learning analyses and complex computations in a timely manner.

Humans are particularly efficient at recognising objects – think about what surrounds us: one recognises the object one is reading this text from as a screen, the place where one sits as a chair, the utensil from which one drinks coffee as a cup, and one does all of this extremely quickly and virtually automatically. One is able to do all this despite the fact that 1) one holds large amounts of information about each object (if asked to write down everything one knows about a pen, one would certainly have a lot to say); and 2) there are several exemplars of each object type (a glass can be tall, made out of glass, metal, paper or plastic, it can be of different colours, etc. – but despite that, any of them would still be a glass). How does one do this? How is one able to store and process so much information in the process of recognising a glass, and to generalise over all the different instances of a glass to arrive at the concept “glass”? The goal of ContentMAP is to understand the processes that lead to successful object recognition.

The answer to these questions lies in a better understanding of the organisational principles of information in the brain. It is, in fact, the efficient organisation of conceptual information and object representations in the brain that allows one to quickly and efficiently recognise the keyboard in front of each of us. To study the neuronal organisation of object knowledge, the project collects large sets of fMRI data from several participants and then tries to decode the organisational principles of information in the brain.

Given the amount of data and the computational requirements of this type of data at the pre-processing and post-processing stages, the use of HPC is essential to enable these studies to be conducted in a timely manner. For example, at the post-processing stage, the project uses whole-brain Support Vector Machine classification algorithms (searchlight procedures) that require hundreds of thousands of classifiers to be trained. Moreover, for each of these classifiers one needs to compute a sampling distribution of the average, as well as test the various classifications of interest, and all of this has to be done per participant.
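The sketch below illustrates what such a searchlight analysis can look like in code. It is only a schematic example assuming the nilearn and scikit-learn libraries; the file names, labels, and parameters are placeholders, and the ContentMAP pipeline itself may be implemented differently.

```python
# Schematic searchlight decoding example (placeholder file names and parameters;
# not the ContentMAP project's actual pipeline).
import numpy as np
from nilearn.decoding import SearchLight
from sklearn.model_selection import KFold
from sklearn.svm import LinearSVC

fmri_img = "sub-01_task-objects_bold.nii.gz"   # 4D fMRI time series (placeholder)
mask_img = "sub-01_brain_mask.nii.gz"          # brain mask (placeholder)
labels = np.loadtxt("sub-01_conditions.txt")   # one object-category label per volume

searchlight = SearchLight(
    mask_img,
    radius=6.0,                # sphere radius in mm; one classifier per sphere centre
    estimator=LinearSVC(),     # the SVM trained inside each searchlight sphere
    cv=KFold(n_splits=5),      # cross-validation scheme
    n_jobs=-1,                 # spread the many thousands of fits over all available cores
    verbose=1,
)
searchlight.fit(fmri_img, labels)
# searchlight.scores_ now holds one cross-validated accuracy per voxel,
# and the whole procedure is repeated for every participant.
```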

Because of this, the use of the HPC facilities of the Advanced Computing Laboratory (LCA) at the University of Coimbra is crucial. It allows us to actually perform these analyses in one to two weeks – something that on our 14-core computers would take a few months, which in practice would most probably mean that the analysis would not be done.

    By Faculty of Psychology and Educational Sciences, University of Coimbra

     

    Reference 

    ProAction Lab http://proactionlab.fpce.uc.pt/ 

Webinar: Addressing the challenges of scientific visualization in the exascale age https://www.risc2-project.eu/events/webinar-addressing-the-challenges-of-scientific-visualization-in-the-exascale-age/ Tue, 24 Jan 2023 10:56:42 +0000


Date: May 31, 2023 | 4 p.m. (UTC+1)

Speaker: João Barbosa, INESC TEC & MACC

Moderator: Bernd Mohr, Jülich Supercomputing Centre (JSC)

In the coming age of exascale computing, traditional post-hoc scientific visualization and analysis face challenges similar to those of numerical simulation. This talk will cover new methodologies of scientific visualization for high-performance computing systems, specially designed for large-scale scientific visualization, that provide greater scalability, flexibility, and detail to overcome some of these challenges.
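To give a flavour of the in-situ idea behind the talk (this toy is purely illustrative and unrelated to any specific tool presented in the webinar), the loop below renders small images while the "simulation" runs, instead of dumping full snapshots to disk for later post-hoc visualization:

```python
# Toy in-situ rendering loop (illustration only; not tied to any tool from the talk).
import numpy as np
import matplotlib
matplotlib.use("Agg")                      # off-screen rendering, as on a compute node
import matplotlib.pyplot as plt

field = np.random.rand(512, 512)           # stand-in for the simulation state

for step in range(1, 101):
    # Advance the "simulation" (here just a trivial smoothing step).
    field = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                    + np.roll(field, 1, 1) + np.roll(field, -1, 1))
    if step % 25 == 0:
        # In-situ: produce a compact image now and keep only that,
        # rather than storing the full field for post-hoc analysis.
        plt.imshow(field, cmap="viridis")
        plt.axis("off")
        plt.savefig(f"frame_{step:04d}.png", dpi=72, bbox_inches="tight")
        plt.close()
```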

About the speaker: João Barbosa joined the Minho Advanced Computing Center (MACC) in March 2020 as a full-time researcher in high-performance computing, specializing in scientific visualization. Previously, he was part of the Texas Advanced Computing Center (TACC) Scalable Visualization team. As a Research Associate at TACC, João worked on several scientific visualization (SciVis) projects, ranging from high-level applications such as oil and gas to low-level high-performance software packages, in partnership with leading hardware and software companies. His current research focuses on high-performance real-time in-situ photo-realistic ray tracing for SciVis.

     

     

Webinar: Developing complex workflows that include HPC, Artificial Intelligence and Data Analytics https://www.risc2-project.eu/events/webinar-5-developing-complex-workflows-that-include-hpc-artificial-intelligence-and-data-analytics/ Tue, 24 Jan 2023 10:51:32 +0000

    Date: February 22, 2023 | 4 p.m. (UTC)

    Speaker: Rosa M. Badia, Barcelona Supercomputing Center

    Moderator: Esteban Mocskos, Universidad de Buenos Aires

The evolution of High-Performance Computing (HPC) systems towards ever more complex machines is opening up the opportunity of hosting larger and more heterogeneous applications. In this sense, the demand for developing applications that are not purely HPC, but that combine aspects of Artificial Intelligence and/or Data Analytics, is becoming more common. However, there is a lack of environments that support the development of these complex workflows. The webinar will present PyCOMPSs, a parallel task-based programming model for Python. Based on simple annotations, sequential Python programs can be executed in parallel in HPC clusters and other distributed infrastructures.
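As a flavour of what these annotations look like, here is a minimal sketch based on the publicly documented PyCOMPSs API; the workload itself is a made-up placeholder, and the script is meant to be launched with the COMPSs runtime (for example via its runcompss launcher) rather than plain Python.

```python
# Minimal PyCOMPSs-style sketch: functions decorated as tasks are scheduled in
# parallel by the COMPSs runtime; the main code stays sequential Python.
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on


@task(returns=int)
def count_symbols(chunk):
    # Placeholder workload; each invocation may run on a different worker node.
    return len(chunk)


if __name__ == "__main__":
    chunks = ["ACGT" * 1000, "TTGA" * 2000, "GGCC" * 1500]
    partial = [count_symbols(c) for c in chunks]   # tasks are submitted asynchronously
    totals = compss_wait_on(partial)               # barrier: gather the actual results
    print(sum(totals))
```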

    PyCOMPSs has been extended to support tasks that invoke HPC applications and can be combined with Artificial Intelligence and Data analytics frameworks.

Some of these extensions are made in the framework of the eFlows4HPC project, which in addition is developing the HPC Workflows as a Service (HPCWaaS) methodology to make the development, deployment, execution and reuse of workflows easier. The webinar will present the current status of the PyCOMPSs programming model and how it is being extended in the eFlows4HPC project towards the project’s needs. The HPCWaaS methodology will also be introduced.

About the speaker: Rosa M. Badia holds a PhD in Computer Science (1994) from the Technical University of Catalonia (UPC). She is the manager of the Workflows and Distributed Computing research group at the Barcelona Supercomputing Center (BSC).

Her current research interests are programming models for complex platforms (from edge and fog to clouds and large HPC systems). The group led by Dr. Badia has been developing the StarSs programming model for more than 15 years, with high success in adoption by application developers. Currently the group focuses its efforts on PyCOMPSs/COMPSs, an instance of the programming model for distributed computing, including the Cloud.

    Dr Badia has published nearly 200 papers in international conferences and journals in the topics of her research. Her group is very active in projects funded by the European Commission and in contracts with industry. Dr Badia is the PI of the eFlows4HPC project.

    Registrations are now closed.

     

JUPITER Ascending – First European Exascale Supercomputer Coming to Jülich https://www.risc2-project.eu/2023/01/02/jupiter-ascending-first-european-exascale-supercomputer-coming-to-julich/ Mon, 02 Jan 2023 12:14:22 +0000

It was finally decided in 2022: Forschungszentrum Jülich will be home to Europe’s first exascale computer. The supercomputer is set to be the first in Europe to surpass the threshold of one quintillion (a “1” followed by 18 zeros) calculations per second. The system will be acquired by the European supercomputing initiative EuroHPC JU. The exascale computer should help to solve important and urgent scientific questions regarding, for example, climate change, how to combat pandemics, and sustainable energy production, while also enabling the intensive use of artificial intelligence and the analysis of large data volumes. The overall costs for the system amount to 500 million euros. Of this total, 250 million euros is being provided by EuroHPC JU and a further 250 million euros in equal parts by the German Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW).

The computer, named JUPITER (short for “Joint Undertaking Pioneer for Innovative and Transformative Exascale Research”), will be installed in 2023/2024 on the campus of Forschungszentrum Jülich. It is intended that the system will be operated by the Jülich Supercomputing Centre (JSC), whose supercomputers JUWELS and JURECA currently rank among the most powerful in the world. JSC participated in the application procedure for a high-end supercomputer as a member of the Gauss Centre for Supercomputing (GCS), an association of the three German national supercomputing centres: JSC in Jülich, the High-Performance Computing Center Stuttgart (HLRS), and the Leibniz Supercomputing Centre (LRZ) in Garching. The competition was organized by the European supercomputing initiative EuroHPC JU, which was formed by the European Union together with European countries and private companies.

JUPITER is now set to become the first European supercomputer to make the leap into the exascale class. In terms of computing power, it will be more powerful than 5 million modern laptops or PCs. Just like Jülich’s current supercomputer JUWELS, JUPITER will be based on a dynamic, modular supercomputing architecture, which Forschungszentrum Jülich developed together with European and international partners in the EU’s DEEP research projects.

In a modular supercomputer, various computing modules are coupled together. This enables program parts of complex simulations to be distributed over several modules, ensuring that the various hardware properties can be optimally utilized in each case. Its modular construction also means that the system is well prepared for integrating future technologies such as quantum computing or neuromorphic modules, which emulate the neural structure of a biological brain.

Figure: Modular Supercomputing Architecture – computing and storage modules of the exascale computer in its basic configuration (blue), as well as optional modules (green) and modules for future technologies (purple) as possible extensions.

In its basic configuration, JUPITER will have an enormously powerful booster module with highly efficient GPU-based computation accelerators. Massively parallel applications are accelerated by this booster in a similar way to a turbocharger, for example to calculate high-resolution climate models, develop new materials, simulate complex cell processes and energy systems, advance basic research, or train next-generation, computationally intensive machine-learning algorithms.

One major challenge is the energy required for such large computing power. The average power draw is anticipated to be up to 15 megawatts. JUPITER has been designed as a “green” supercomputer and will be powered by green electricity. The envisaged warm-water cooling system should help to ensure that JUPITER achieves the highest efficiency values. At the same time, the cooling technology opens up the possibility of intelligently using the waste heat that is produced. For example, just like its predecessor system JUWELS, JUPITER will be connected to the new low-temperature network on the Forschungszentrum Jülich campus. Further potential applications for the waste heat from JUPITER are currently being investigated by Forschungszentrum Jülich.

    By Jülich Supercomputing Centre (JSC)

     

Image: Germany’s fastest supercomputer JUWELS at Forschungszentrum Jülich, which is funded in equal parts by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW) via the Gauss Centre for Supercomputing (GCS). (Copyright: Forschungszentrum Jülich / Sascha Kreklau)
