Scientific Machine Learning and HPC (28 June 2023)
https://www.risc2-project.eu/2023/06/28/scientific-machine-learning-and-hpc/

In recent years we have seen rapid growth of interest in artificial intelligence in general, and machine learning (ML) techniques in particular, in different branches of science and engineering. The rapid growth of the Scientific Machine Learning field derives from the combined development and use of efficient data analysis algorithms, the availability of data from scientific instruments and computer simulations, and advances in high-performance computing. On May 25, 2023, COPPE/UFRJ organized a forum to discuss developments in artificial intelligence and its impact on society [*].

Alvaro Coutinho, coordinator of the High Performance Computing Center (NACAD) at COPPE/UFRJ, presented advances in AI in engineering and highlighted the importance of multidisciplinary research networks for addressing current issues in Scientific Machine Learning. He took the opportunity to stress the need for Brazil to invest in high-performance computing capacity.

The country's sovereignty requires autonomy in producing ML advances, which in turn depends on HPC support at universities and research centers. Brazil has nine machines in the TOP500 list of the most powerful computer systems in the world, but almost all of them belong to the oil company Petrobras, and universities need much more. ML is well known to require HPC, and when combined with scientific computer simulations it becomes essential.

The conventional notion of ML involves training an algorithm to automatically discover patterns, signals, or structures that may be hidden in huge databases and whose exact nature is unknown and therefore cannot be explicitly programmed. This field faces two major drawbacks: the need for a significant volume of (labelled) data, which is expensive to acquire, and limited extrapolation ability (making predictions beyond the scenarios contained in the training data is difficult).
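One way Scientific ML mitigates both drawbacks is to add the governing physics to the training objective, as in the physics-informed approaches of [3, 5]. The following minimal numpy sketch (ours, not from the forum) fits a toy model for du/dt = -ku using only two labelled points plus cheap unlabelled collocation points that enforce the equation:

    import numpy as np

    # Toy physics-informed regression for du/dt = -k*u, u(0) = 1 (exact: exp(-k t)).
    # Model: polynomial u(t; c) = sum_i c_i t^i, fitted by jointly minimizing a
    # data loss (few labelled points) and a physics residual loss (collocation points).
    k, deg = 1.0, 5

    t_data = np.array([0.0, 0.5])              # scarce labelled data
    u_data = np.exp(-k * t_data)
    t_col = np.linspace(0.0, 2.0, 50)          # cheap unlabelled collocation points

    def design(t):                             # matrix whose rows evaluate u(t)
        return np.vander(t, deg + 1, increasing=True)

    def d_design(t):                           # matrix whose rows evaluate du/dt
        V = np.vander(t, deg + 1, increasing=True)
        D = np.zeros_like(V)
        for i in range(1, deg + 1):
            D[:, i] = i * V[:, i - 1]          # d(t^i)/dt = i * t^(i-1)
        return D

    # Stack data equations (u = u_data) and physics equations (du/dt + k*u = 0)
    A = np.vstack([design(t_data), d_design(t_col) + k * design(t_col)])
    b = np.concatenate([u_data, np.zeros_like(t_col)])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)

    pred = float(design(np.array([1.5])) @ c)  # extrapolate past the labelled data
    print(f"model: {pred:.4f}  exact: {np.exp(-k * 1.5):.4f}")

The physics residual acts as a regularizer that substitutes for the missing labels, which is exactly why these methods need far less data than conventional ML.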

Considering that an algorithm's predictive ability is a learning skill, these challenges must be addressed to improve the analytical and predictive capacity of Scientific ML algorithms, for example, to maximize their impact in renewable energy applications. References [1-5] illustrate recent advances in Scientific Machine Learning in different areas of engineering and computer science.

References:

[*] https://www.coppe.ufrj.br/pt-br/planeta-coppe-noticias/noticias/coppe-e-sociedade-especialistas-debatem-os-reflexos-da-inteligencia

[1] Baker, Nathan, Frank Alexander, Timo Bremer, Aric Hagberg, Yannis Kevrekidis, Habib Najm, Manish Parashar, Abani Patra, James Sethian, Stefan Wild, Karen Willcox, and Steven Lee. Workshop Report on Basic Research Needs for Scientific Machine Learning: Core Technologies for Artificial Intelligence. United States: N. p., 2019. Web. doi:10.2172/1478744.

[2] Brunton, Steven L., Bernd R. Noack, and Petros Koumoutsakos. “Machine learning for fluid mechanics.” Annual Review of Fluid Mechanics 52 (2020): 477-508.

[3] Karniadakis, George Em, et al. “Physics-informed machine learning.” Nature Reviews Physics 3.6 (2021): 422-440.

[4] Inria White Book on Artificial Intelligence: Current challenges and Inria’s engagement, 2nd edition, 2021. URL: https://www.inria.fr/en/white-paper-inria-artificial-intelligence

[5] Silva, Romulo, Umair bin Waheed, Alvaro Coutinho, and George Em Karniadakis. “Improving PINN-based Seismic Tomography by Respecting Physical Causality.” In AGU Fall Meeting Abstracts, vol. 2022, pp. S11C-09. 2022.

More than 100 students participated in the HPC, Data & Architecture Week (21 March 2023)
https://www.risc2-project.eu/2023/03/21/more-than-100-students-participated-in-the-hpc-data-architecture-week/

RISC2 supported the ‘HPC, Data & Architecture Week’, which took place between March 13 and 17, 2023, in Buenos Aires. This initiative aimed to recover and deepen the training of human resources for the development of scientific applications and their efficient use in parallel computing environments.

The event had four main courses: “Foundations of Parallel Programming”, “Large-scale data processing and machine learning”, “New architectures and specific computing platforms”, and “Administration techniques for large-scale computing facilities”.

More than 100 students, who traveled from different parts of the country, actively participated in the event. Thirty of them received financial support for travel and accommodation, provided by the National HPC System (SNCAD) of Argentina's Ministry of Science.

Esteban Mocskos, one of the organizers of the event, believes that “this kind of event should be organized regularly to sustain the flow of students into the area of HPC”. In his opinion, “a lot of students from Argentina get their first contact with HPC topics” this way, and “in such a large country, impacting a distant region also means impacting the neighboring countries. Those students will bring their experience to other students in their home regions”. According to Mocskos, initiatives like the “HPC, Data & Architecture Week” spark a lot of collaborations.

Developing Efficient Scientific Gateways for Bioinformatics in Supercomputer Environments Supported by Artificial Intelligence (20 March 2023)
https://www.risc2-project.eu/2023/03/20/developing-efficient-scientific-gateways-for-bioinformatics-in-supercomputer-environments-supported-by-artificial-intelligence/

Scientific gateways bring enormous benefits to end users by simplifying access and hiding the complexity of the underlying distributed computing infrastructure, but they require significant development and maintenance efforts. BioinfoPortal [1], through its CSGrid [2] middleware, takes advantage of the heterogeneous resources of Santos Dumont [3]. However, task submission still involves a substantial step: deciding which configuration leads to efficient execution. This project aims to develop green and intelligent scientific gateways for BioinfoPortal, supported by high-performance computing (HPC) environments and specialised technologies such as scientific workflows, data mining, machine learning, and deep learning.

The efficient analysis and interpretation of Big Data opens new challenges in molecular biology, genetics, biomedicine, and healthcare, for example to improve personalised diagnostics and therapeutics, and finding new ways to deal with this massive amount of information becomes necessary. New Bioinformatics and Computational Biology paradigms drive data storage, management, and access. Advances in HPC and Big Data in this domain represent a vast new field of opportunities for bioinformatics researchers, as well as a significant challenge.

The BioinfoPortal science gateway is a multiuser Brazilian infrastructure. We present several challenges for efficiently executing applications and discuss our findings on improving the use of computational resources. We performed several large-scale bioinformatics experiments that are considered computationally intensive and time-consuming. We are currently coupling artificial intelligence to generate models that analyse computational and bioinformatics metadata, in order to understand how automatic learning can predict the efficient use of computational resources. The computational executions are conducted at Santos Dumont, the largest supercomputer in Latin America, dedicated to the research community, with 5.1 petaflops and 36,472 computational cores distributed over 1,134 computational nodes.
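The prediction step can be pictured with a small, hypothetical sketch (feature names and the toy cost model are illustrative, not the project's actual pipeline): a regression model trained on past job metadata that, given a new input, ranks candidate core counts by predicted runtime:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Synthetic stand-in for execution metadata: (cores, input size in MB) -> runtime (s).
    # In the real gateway this would come from BioinfoPortal/Santos Dumont job logs.
    rng = np.random.default_rng(0)
    cores = rng.choice([24, 48, 96, 192], size=300)
    size_mb = rng.uniform(100, 5000, size=300)
    runtime = size_mb / cores + 0.02 * cores + rng.normal(0, 1, 300)  # toy cost model

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(np.column_stack([cores, size_mb]), runtime)

    # For a new 2 GB input, predict the runtime of each candidate allocation
    candidates = np.array([[c, 2000.0] for c in (24, 48, 96, 192)])
    pred = model.predict(candidates)
    best = int(candidates[np.argmin(pred)][0])
    print(dict(zip((24, 48, 96, 192), pred.round(1))), "-> choose", best, "cores")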

By:

A. Carneiro, B. Fagundes, C. Osthoff, G. Freire, K. Ocaña, L. Cruz, L. Gadelha, M. Coelho, M. Galheigo, and R. Terra are with the National Laboratory of Scientific Computing, Rio de Janeiro, Brazil.

D. Carvalho is with the Federal Center for Technological Education Celso Suckow da Fonseca, Rio de Janeiro, Brazil.

Douglas Cardoso is with the Polytechnic Institute of Tomar, Portugal.

F. Boito and L. Teylo are with the University of Bordeaux, CNRS, Bordeaux INP, INRIA, LaBRI, Talence, France.

P. Navaux is with the Informatics Institute, Federal University of Rio Grande do Sul, Porto Alegre, Brazil.

References:

Ocaña, K. A. C. S.; Galheigo, M.; Osthoff, C.; Gadelha, L. M. R.; Porto, F.; Gomes, A. T. A.; Oliveira, D.; Vasconcelos, A. T. BioinfoPortal: A scientific gateway for integrating bioinformatics applications on the Brazilian national high-performance computing network. Future Generation Computer Systems, v. 107, p. 192-214, 2020.

Mondelli, M. L.; Magalhães, T.; Loss, G.; Wilde, M.; Foster, I.; Mattoso, M. L. Q.; Katz, D. S.; Barbosa, H. J. C.; Vasconcelos, A. T. R.; Ocaña, K. A. C. S; Gadelha, L. BioWorkbench: A High-Performance Framework for Managing and Analyzing Bioinformatics Experiments. PeerJ, v. 1, p. 1, 2018.

Coelho, M.; Freire, G.; Ocaña, K.; Osthoff, C.; Galheigo, M.; Carneiro, A. R.; Boito, F.; Navaux, P.; Cardoso, D. O. Desenvolvimento de um Framework de Aprendizado de Máquina no Apoio a Gateways Científicos Verdes, Inteligentes e Eficientes: BioinfoPortal como Caso de Estudo Brasileiro. In: XXIII Simpósio em Sistemas Computacionais de Alto Desempenho – WSCAD 2022 (https://wscad.ufsc.br/), 2022.

Terra, R.; Ocaña, K.; Osthoff, C.; Cruz, L.; Boito, F.; Navaux, P.; Carvalho, D. Framework para a Construção de Redes Filogenéticas em Ambiente de Computação de Alto Desempenho. In: XXIII Simpósio em Sistemas Computacionais de Alto Desempenho – WSCAD 2022 (https://wscad.ufsc.br/), 2022.

Ocaña, K.; Cruz, L.; Coelho, M.; Terra, R.; Galheigo, M.; Carneiro, A.; Carvalho, D.; Gadelha, L.; Boito, F.; Navaux, P.; Osthoff, C. ParslRNA-Seq: an efficient and scalable RNAseq analysis workflow for studies of differentiated gene expression. In: Latin America High-Performance Computing Conference (CARLA), 2022, Rio Grande do Sul, Brazil. Proceedings of the Latin American High-Performance Computing Conference – CARLA 2022 (http://www.carla22.org/), 2022.

[1] https://bioinfo.lncc.br/

[2] https://git.tecgraf.puc-rio.br/csbase-dev/csgrid/-/tree/CSGRID-2.3-LNCC

[3] https://sdumont.lncc.br

Webinar: Improving energy-efficiency of High-Performance Computing clusters (26 January 2023)
https://www.risc2-project.eu/events/webinar-7-improving-energy-efficiency-of-high-performance-computing-clusters/

Date: April 26, 2023 | 3 p.m. (UTC+1)

Speakers: Lubomir Riha and Ondřej Vysocký, IT4Innovations National Supercomputing Center

Moderator: Esteban Mocskos, Universidad de Buenos Aires

High-Performance Computing centers consume megawatts of electrical power, which is a limiting factor in building bigger systems on the path to exascale and post-exascale clusters. Such high power consumption leads to several challenges, including a robust power supply and distribution network, enormous energy bills, and significant CO2 emissions. To increase power efficiency, vendors provide various heterogeneous hardware that must be fully utilized by users' applications to be used efficiently. This requirement may be hard to fulfil, which opens the possibility of limiting the available resources in exchange for additional power and energy savings, with no or only a small performance penalty.

The talk will present best practices on how to grant users the rights to control hardware parameters, how to measure the energy consumption of the hardware, and what can be expected from energy-saving activities based on hardware tuning.
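As a concrete taste of such measurements, the sketch below reads the CPU package energy counter exposed by the Linux powercap (RAPL) interface before and after a workload. This is our illustration, not the MERIC tooling discussed in the talk, and it assumes an Intel CPU that exposes /sys/class/powercap/intel-rapl:0 with read permission on its counters:

    import time
    from pathlib import Path

    RAPL = Path("/sys/class/powercap/intel-rapl:0")

    def energy_j():
        # energy_uj is a monotonically increasing counter in microjoules
        return int((RAPL / "energy_uj").read_text()) / 1e6

    def measure(workload):
        e0, t0 = energy_j(), time.time()
        workload()
        e1, t1 = energy_j(), time.time()
        wrap = int((RAPL / "max_energy_range_uj").read_text()) / 1e6
        joules = (e1 - e0) % wrap          # the counter wraps around periodically
        return joules, joules / (t1 - t0)  # energy (J) and average power (W)

    joules, watts = measure(lambda: sum(i * i for i in range(10_000_000)))
    print(f"{joules:.1f} J consumed, {watts:.1f} W average")

Running the same workload under different CPU frequency caps and comparing the readings is the essence of the hardware-tuning experiments the talk covers.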

About the speakers:

Lubomir Riha, Ph.D., is the Head of the Infrastructure Research Lab at IT4Innovations National Supercomputing Center. Previously he was a research scientist in the High-Performance Computing Lab at George Washington University, ECE Department. He received his Ph.D. degree in Electrical Engineering from the Czech Technical University in Prague, Czech Republic, and a Ph.D. degree in Computer Science from Bowie State University, USA. Currently, he is a local principal investigator of two EuroHPC Centres of Excellence, MAX and SPACE, and two EuroHPC projects, SCALABLE and EUPEX (which designs a prototype of the European exascale machine). Previously he was a local PI of the H2020 Centre of Excellence POP2 and the H2020-FET HPC READEX projects. His research interests are the optimization of HPC applications, energy-efficient computing, the acceleration of scientific and engineering applications using GPUs and many-core accelerators, and parallel and distributed rendering.

Ondrej Vysocky is a Ph.D. candidate at VSB – Technical University of Ostrava, Czech Republic, and at the same time works at IT4Innovations in the Infrastructure Research Lab. His research focuses on energy efficiency in high-performance computing. He was an investigator of the Horizon 2020 READEX project, which dealt with the energy efficiency of parallel applications using dynamic tuning. Since that time, he has been developing the MERIC library, a runtime system for energy measurement and hardware parameter tuning during a parallel application run. Using the library, he is an investigator in several H2020 projects, including Performance Optimisation and Productivity (POP2) and the European Pilot for Exascale (EUPEX). He is also a member of the PowerStack initiative, which works on a holistic, extensible, and scalable approach to power management.

Webinar: Addressing the challenges of scientific visualization in the exascale age (24 January 2023)
https://www.risc2-project.eu/events/webinar-addressing-the-challenges-of-scientific-visualization-in-the-exascale-age/

Date: May 31, 2023 | 4 p.m. (UTC+1)

Speaker: João Barbosa, INESC TEC & MACC

Moderator: Bernd Mohr, Jülich Supercomputing Centre (JSC)

In the coming age of exascale computing, traditional post-hoc scientific visualization and analysis face challenges similar to those of numerical simulation. This talk will cover new methodologies of scientific visualization for high-performance computing systems, specially designed for large-scale scientific visualization, that provide greater scalability, flexibility, and detail to overcome some of these challenges.

About the speaker: João Barbosa joined the Minho Advanced Computing Center (MACC) in March 2020 as a full-time researcher in high-performance computing, specializing in scientific visualization. Previously, he was part of the Texas Advanced Computing Center (TACC) Scalable Visualization team. As a Research Associate at TACC, João worked on several scientific visualization (SciVis) projects, ranging from high-level applications in fields such as oil and gas to low-level high-performance software packages, in partnership with leading hardware and software companies. His current research focuses on high-performance real-time in-situ photo-realistic ray tracing for SciVis.
JUPITER Ascending – First European Exascale Supercomputer Coming to Jülich (2 January 2023)
https://www.risc2-project.eu/2023/01/02/jupiter-ascending-first-european-exascale-supercomputer-coming-to-julich/

It was finally decided in 2022: Forschungszentrum Jülich will be home to Europe's first exascale computer. The supercomputer is set to be the first in Europe to surpass the threshold of one quintillion (a “1” followed by 18 zeros) calculations per second. The system will be procured by the European supercomputing initiative EuroHPC JU. The exascale computer should help to solve important and urgent scientific questions regarding, for example, climate change, how to combat pandemics, and sustainable energy production, while also enabling the intensive use of artificial intelligence and the analysis of large data volumes. The overall costs for the system amount to 500 million euros. Of this total, 250 million euros is being provided by EuroHPC JU and a further 250 million euros in equal parts by the German Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW).

The computer, named JUPITER (short for “Joint Undertaking Pioneer for Innovative and Transformative Exascale Research”), will be installed in 2023/2024 on the campus of Forschungszentrum Jülich. It is intended that the system will be operated by the Jülich Supercomputing Centre (JSC), whose supercomputers JUWELS and JURECA currently rank among the most powerful in the world. JSC participated in the application procedure for a high-end supercomputer as a member of the Gauss Centre for Supercomputing (GCS), an association of the three German national supercomputing centres: JSC in Jülich, the High-Performance Computing Center Stuttgart (HLRS), and the Leibniz Supercomputing Centre (LRZ) in Garching. The competition was organized by the European supercomputing initiative EuroHPC JU, which was formed by the European Union together with European countries and private companies.

JUPITER is now set to become the first European supercomputer to make the leap into the exascale class. In terms of computing power, it will be more powerful than 5 million modern laptops or PCs. Just like Jülich's current supercomputer JUWELS, JUPITER will be based on a dynamic, modular supercomputing architecture, which Forschungszentrum Jülich developed together with European and international partners in the EU's DEEP research projects.

In a modular supercomputer, various computing modules are coupled together. This enables program parts of complex simulations to be distributed over several modules, ensuring that the different hardware properties can be optimally utilized in each case. The modular construction also means that the system is well prepared for integrating future technologies such as quantum computing or neuromorphic modules, which emulate the neural structure of a biological brain.
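To make the coupling idea concrete, here is a small, hypothetical mpi4py sketch (our illustration, not JUPITER code; run with at least two ranks, e.g. mpirun -n 4): a single MPI job splits its ranks into a “cluster” group and a “booster” group, each computing its part, with results exchanged between the groups:

    from mpi4py import MPI

    # Module membership is faked from the rank here; a real system would derive
    # it from the node type (CPU cluster node vs. GPU booster node) a rank runs on.
    world = MPI.COMM_WORLD
    module = 0 if world.rank < world.size // 2 else 1   # 0 = cluster, 1 = booster
    local = world.Split(color=module, key=world.rank)   # intra-module communicator

    if module == 0:
        # "simulation" part running on the cluster module
        data = sum(range(local.rank * 1000, (local.rank + 1) * 1000))
    else:
        data = None

    # Couple the modules: cluster rank 0 sends its result to booster rank 0
    if module == 0 and local.rank == 0:
        world.send(data, dest=world.size // 2, tag=7)
    elif module == 1 and local.rank == 0:
        data = world.recv(source=0, tag=7)
        print(f"booster received {data} from the cluster module")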

Figure: Modular Supercomputing Architecture. Computing and storage modules of the exascale computer in its basic configuration (blue), as well as optional modules (green) and modules for future technologies (purple) as possible extensions.

In its basic configuration, JUPITER will have an enormously powerful booster module with highly efficient GPU-based computation accelerators. Massively parallel applications are accelerated by this booster in a similar way to a turbocharger, for example to calculate high-resolution climate models, develop new materials, simulate complex cell processes and energy systems, advance basic research, or train next-generation, computationally intensive machine-learning algorithms.

One major challenge is the energy required for such large computing power. The average power is anticipated to be up to 15 megawatts. JUPITER has been designed as a “green” supercomputer and will be powered by green electricity. The envisaged warm-water cooling system should help JUPITER achieve the highest efficiency values. At the same time, the cooling technology opens up the possibility of intelligently using the waste heat that is produced. For example, just like its predecessor system JUWELS, JUPITER will be connected to the new low-temperature network on the Forschungszentrum Jülich campus. Further potential applications for the waste heat from JUPITER are currently being investigated by Forschungszentrum Jülich.

By Jülich Supercomputing Centre (JSC)

Image: Germany's fastest supercomputer JUWELS at Forschungszentrum Jülich, funded in equal parts by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW) via the Gauss Centre for Supercomputing (GCS). (Copyright: Forschungszentrum Jülich / Sascha Kreklau)

Advanced Computing Collaboration to Grow Sustainable Ecosystems (12 December 2022)
https://www.risc2-project.eu/2022/12/12/advanced-computing-collaboration-to-growth-sustainable-ecosystems/

The impact of High-Performance Computing (HPC) in contexts that require large capabilities for simulation and computation is well known. In the development of the RISC2 project, and in view of its main goals, HPC is not a potential instrument whose value is recognised only after exploration, but an essential requirement for scientific, productive, and social activities. Different outcomes are presented in academic venues such as the workshops and main tracks of the Latin American Conference on High-Performance Computing (CARLA 2023). In these venues, different RISC2 contributions show how HPC enables competitiveness, demands collaboration to address shared global interests, and underpins sustainability.

In the European and Latin American (EuroLatAm) HPC ecosystems, it is possible to identify actors in different domains: industry, academia, research, society, and government. Each of them, at different levels, has a set of demands and interactions, depending on its interests. For example, industry demands HPC solutions for productivity and expects academia to train developers who can build applications on top of those solutions. Another example is the relationship between research and government. In the HPC ecosystem, collaboration creates synergies to pursue common interests, but it demands policies and coordinated roadmaps to support long-term projects and activities with a clear impact on society.

Of course, a historical relationship exists between Latin America and Europe, dating back to colonial times. In advanced computing, it can be traced from the first EuroLatAm grid computing projects more than twenty years ago to true supercomputing projects such as RISC and RISC2, now with shared interests, as the different EuroLatAm HPC projects improve competitiveness and collaboration: competitiveness for industrial and productive business, and partnership (and competition) in science, education, and human wellness. So, paraphrasing Mateo Valero, “who does not compute does not compete”, to which I would add, “who does not collaborate does not survive”.

Through collaboration and competitiveness, the RISC2 project makes it possible to identify sustainability elements and sustainable workflows for different projects. The impressive interaction between the actors of the EuroLatAm HPC ecosystem has produced not only scientific results but also policies, recommendations, best practices, and new questions. For these outcomes, at the 2022 Supercomputing Conference, RISC2 received the 2022 HPCwire Editors' Choice Award for the Best HPC Collaboration.

The growth of sustainable advanced computing ecosystems is evident in the results of projects such as RISC2. Collaboration, interaction, and competitiveness build human development and guarantee technological diversification and peer-to-peer relationships to address common interests and problems. RISC2 is thus a crucial step towards a future RISC3, as the previous RISC was in its time.

By Universidad Industrial de Santander

Managing Data and Machine Learning Models in HPC Applications (21 November 2022)
https://www.risc2-project.eu/2022/11/21/managing-data-and-machine-learning-models-in-hpc-applications/

The synergy of data science (including big data and machine learning) and HPC yields many benefits for data-intensive applications, in terms of more accurate predictive data analysis and better decision making. For instance, in the context of the HPDaSc (High Performance Data Science) project between Inria and Brazil, we have shown the importance of real-time analytics for making critical, high-consequence decisions in HPC applications, e.g., preventing useless drilling based on a driller's real-time data and real-time visualization of simulated data. We have also shown the effectiveness of ML for dealing with scientific data, e.g., computing Probability Density Functions (PDFs) over simulated seismic data using Spark.
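As a flavour of the PDF use case, the sketch below uses Spark's built-in kernel density estimator on synthetic stand-in data (the real workflow runs over simulated seismic datasets):

    import numpy as np
    from pyspark import SparkContext
    from pyspark.mllib.stat import KernelDensity

    # Estimate a PDF over (synthetic) amplitude samples, distributed across a cluster
    sc = SparkContext(appName="pdf-over-simulation-data")
    samples = sc.parallelize(np.random.normal(0.0, 1.0, 100_000).tolist(), numSlices=8)

    kd = KernelDensity()
    kd.setSample(samples)          # distributed sample RDD
    kd.setBandwidth(0.2)           # kernel width; tune to the data
    grid = [x / 10.0 for x in range(-30, 31)]
    pdf = kd.estimate(grid)        # density at grid points, computed in parallel

    print(list(zip(grid[:3], pdf[:3])))
    sc.stop()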

However, to realize the full potential of this synergy, ML models (models for short) must be built, combined, and ensembled, which can be very complex as there may be many models to select from. Furthermore, models should be shared and reused, in particular across different execution environments such as HPC or Spark clusters.

To address this problem, we proposed Gypscie [Porto 2022, Zorrilla 2022], a new framework that supports the entire ML lifecycle and enables model reuse and import from other frameworks. The approach behind Gypscie is to combine, in a single framework, rich capabilities for model management, data management, and model execution that are typically provided by separate tools. Overall, Gypscie provides: a platform supporting the complete model life-cycle, from model building to deployment, monitoring, and policy enforcement; an environment for casual users to find ready-to-use models that best fit a particular prediction problem; an environment to optimize ML task scheduling and execution; an easy way for developers to benchmark their models against competing models and improve them; a central point of access to assess models' compliance with policies and ethics and to obtain and curate observational and predictive data; and provenance information and model explainability. Finally, Gypscie interfaces with multiple execution environments to run ML tasks, e.g., an HPC system such as the Santos Dumont supercomputer at LNCC or a Spark cluster.

Gypscie comes with SAVIME [Silva 2020], a multidimensional array in-memory database system for importing, storing, and querying model (tensor) data. The open-source SAVIME system has been developed to support analytical queries over scientific data. It offers an extremely efficient ingestion procedure, which practically eliminates the waiting time before incoming data can be analysed. It also supports dense and sparse arrays and non-integer dimension indexing, and it offers a functional query language processed by a query optimizer that generates efficient query execution plans.

References

[Porto 2022] Fabio Porto, Patrick Valduriez: Data and Machine Learning Model Management with Gypscie. CARLA 2022 – Workshop on HPC and Data Sciences meet Scientific Computing, SCALAC, Sep 2022, Porto Alegre, Brazil. pp.1-2. 

[Zorrilla 2022] Rocío Zorrilla, Eduardo Ogasawara, Patrick Valduriez, Fabio Porto: A Data-Driven Model Selection Approach to Spatio-Temporal Prediction. SBBD 2022 – Brazilian Symposium on Databases, SBBD, Sep 2022, Buzios, Brazil. pp.1-12. 

[Silva 2020] A. C. Silva, H. Lourenço, D. Ramos, F. Porto, P. Valduriez. SAVIME: An Array DBMS for Simulation Analysis and Prediction. Journal of Information and Data Management 11(3), 2020.

By LNCC and Inria

Using supercomputing for accelerating life science solutions (1 November 2022)
https://www.risc2-project.eu/2022/11/01/using-supercomputing-for-accelerating-life-science-solutions/

The world of High Performance Computing (HPC) is now moving towards exascale performance, i.e. the ability to calculate 10^18 operations per second. A variety of applications will be improved to take advantage of this computing power, leading to better prediction and models in different fields, like Environmental Sciences, Artificial Intelligence, Material Sciences, and Life Sciences.

In Life Sciences, HPC advancements can improve different areas:

  • a reduced time to scientific discovery;
  • the ability to generate predictions necessary for precision medicine;
  • new healthcare and genomics-driven research approaches;
  • the processing of huge datasets for deep and machine learning;
  • the optimization of modeling, such as Computer-Aided Drug Design (CADD);
  • enhanced security and protection of healthcare data in HPC environments, in compliance with the European GDPR regulation;
  • the management of massive amounts of data, for example for clinical trials, drug development, and genomics data analytics.

The outbreak of COVID-19 has further accelerated this progress from different points of view. Some European projects aim at repurposing known and active ingredients to prepare new drugs against COVID-19 [Exscalate4CoV, Ligate], while others focus on the management and monitoring of contagion clusters, providing an innovative approach to learning from the SARS-CoV-2 crisis and deriving recommendations for future waves and pandemics [Orchestra].

The ability to deal with massive amounts of data in HPC environments is also used to create databases of nucleic acid sequencing data and use them to detect allelic variant frequencies, as in the NIG project [Nig], a collaboration with the Network for Italian Genomes. Another example of this capability is the set-up of a data-sharing platform based on novel federated learning schemes to advance research in personalised medicine for haematological diseases [Genomed4All].

Supercomputing is widely used in drug design (the process of finding medicines for diseases for which there are no or insufficient treatments), with many projects, including RISC2, active in this field.

Sometimes, when there is no previous knowledge of the biological target, as happened with COVID-19, discovering new drugs requires creating new molecules from scratch [Novartis]. This process involves billion-dollar investments to produce and test thousands of molecules, and it usually has a low success rate: only about 12% of potential drugs entering clinical development are approved [Engitix]. The whole process, from identifying a possible compound to the end of the clinical trial, can take up to 10 years. Nowadays there is an uneven coverage of diseases: most compounds target genetic conditions, while only a few antivirals and antibiotics have been found.

The search for candidate drugs occurs mainly through two different approaches: high-throughput screening and virtual screening. The first is more reliable but also very expensive and time-consuming: it is usually applied to well-known targets, mainly by pharmaceutical companies. The second approach is a good compromise between cost and accuracy and is typically applied to relatively new targets, in academic laboratories, where it is also used to discover or better understand the mechanisms of these targets [Liu2016].

Candidate drugs are usually small molecules that bind to a specific protein, or part of it, inhibiting the usual activity of the protein itself. For example, binding the correct ligand to a viral enzyme may stop viral infection. In the process of virtual screening, millions of compounds are screened against the target protein at different levels: the most basic level simply takes into account the shape needed to fit correctly into the protein, while higher levels also consider other features such as specific interactions, protein flexibility, solubility, and human tolerance. A “score” is assigned to each docked ligand, and the compounds with the highest scores are studied further. With massively parallel computers, we can rapidly filter extremely large molecule databases (e.g., billions of molecules).
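The skeleton of such a screening loop is simple; the sketch below scores a fake compound library in parallel and keeps the best candidates (score() is a toy placeholder for a real docking function, and the compound IDs are invented):

    from multiprocessing import Pool

    def score(compound):
        # Toy stand-in: pretend lower "energy" is better; real docking would go here
        return sum((ord(c) % 17) - 8 for c in compound), compound

    # Fake compound identifiers standing in for a molecule database
    library = [f"CMPD-{i:07d}" for i in range(1_000_000)]

    if __name__ == "__main__":
        with Pool() as pool:                  # embarrassingly parallel scoring
            scored = pool.map(score, library, chunksize=10_000)
        top = sorted(scored)[:100]            # keep the 100 best-scoring candidates
        print(top[:3])

On an HPC cluster the same pattern is simply spread over thousands of nodes, which is what makes billion-compound screens feasible.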

The current computational power of HPC clusters allows us to analyse up to 3 million compounds per second [Exscalate]. Even though vaccines were developed remarkably quickly, effective drug treatments for people already suffering from COVID-19 were scarce at the beginning of the pandemic. At that time, supercomputers around the world were asked to help with drug design, a real-world example of the power of urgent computing. CINECA participates in Exscalate4CoV [Exscalate4CoV], currently the most advanced centre of competence for fighting the coronavirus, combining the most powerful supercomputing resources and artificial intelligence with experimental facilities and clinical validation.

References

[Engitix] https://engitix.com/technology/

[Exscalate] https://www.exscalate.eu/en/projects.html

[Exscalate4CoV] https://www.exscalate4cov.eu/

[Genomed4All] https://genomed4all.eu/

[Ligate] https://www.ligateproject.eu/

[Liu2016] T. Liu, D. Lu, H. Zhang, M. Zheng, H. Yang, Ye. Xu, C. Luo, W. Zhu, K. Yu, and H. Jiang, “Applying high-performance computing in drug discovery and molecular simulation” Natl Sci Rev. 2016 Mar; 3(1): 49–63.

[Nig] http://www.nig.cineca.it/

[Novartis] https://www.novartis.com/stories/art-drug-design-technological-age

[Orchestra] https://orchestra-cohort.eu/

By CINECA

HPC meets AI and Big Data (6 October 2022)
https://www.risc2-project.eu/2022/10/06/hpc-meets-ai-and-big-data/

HPC services are no longer solely targeted at highly parallel modelling and simulation tasks. Indeed, the computational power offered by these services is now being used to support data-centric Big Data and Artificial Intelligence (AI) applications. By combining both types of computational paradigms, HPC infrastructures will be key for improving the lives of citizens, speeding up scientific breakthroughs in different fields (e.g., health, IoT, biology, chemistry, physics), and increasing the competitiveness of companies [OG+15, NCR+18].

As the utility and usage of HPC infrastructures increase, more computational and storage power is required to efficiently handle the targeted applications. In fact, many HPC centres are now aiming at exascale supercomputers supporting at least one exaFLOPS (10^18 operations per second), which represents a thousandfold increase in processing power over the first petascale computer, deployed in 2008 [RD+15]. Although this is a necessary requirement for handling the increasing number of HPC applications, several outstanding challenges still need to be tackled so that this extra computational power can be fully leveraged.

Management of large infrastructures and heterogeneous workloads: By adding more compute and storage nodes, one also increases the complexity of the overall HPC distributed infrastructure, making it harder to monitor and manage. This complexity is compounded by the need to support highly heterogeneous applications, which translate into different workloads with specific data storage and processing needs [ECS+17]. For example, on the one hand, traditional scientific modeling and simulation tasks require large slices of computational time, are CPU-bound, and rely on iterative approaches (parametric/stochastic modeling). On the other hand, data-driven Big Data applications comprise shorter computational tasks that are I/O-bound and, in some cases, have real-time response requirements (i.e., they are latency-oriented). Also, many of these applications leverage AI and machine learning tools that require specific hardware (e.g., GPUs) to be efficient.

Support for general-purpose analytics: The increased heterogeneity also demands that HPC infrastructures support general-purpose AI and Big Data applications that were not explicitly designed to run on specialised HPC hardware [KWG+13], so that developers are not required to significantly change their applications for them to execute efficiently on HPC clusters.

Avoiding the storage bottleneck: Only increasing the computational power and improving the management of HPC infrastructures may still not be enough to fully harness the capabilities of these infrastructures. In fact, Big Data and AI applications are data-driven and require efficient data storage and retrieval from HPC clusters. With an increasing number of applications and heterogeneous workloads, the storage systems supporting HPC can easily become a bottleneck [YDI+16, ECS+17]. Indeed, as pointed out by several studies, storage access time is one of the major bottlenecks limiting the efficiency of current and next-generation HPC infrastructures.

In order to address these challenges, RISC2 partners are exploring:

  • New monitoring and debugging tools that can aid in the analysis of complex AI and Big Data workloads, pinpointing potential performance and efficiency bottlenecks and helping system administrators and developers troubleshoot them [ENO+21].
  • Emerging virtualization technologies, such as containers, that enable users to efficiently deploy and execute traditional AI and Big Data applications in an HPC environment, without requiring any changes to their source code [FMP21].
  • The Software-Defined Storage paradigm, in order to improve the Quality-of-Service (QoS) of HPC storage services when supporting hundreds to thousands of data-intensive AI and Big Data applications [DLC+22, MTH+22]; the transparent storage tiering explored in [DLC+22] is sketched below.
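A minimal illustration of the tiering idea (ours; the paths and file layout are hypothetical): stage hot training files from the shared parallel file system to fast node-local storage once, sequentially, so that the random-access training phase no longer stresses the shared storage:

    import shutil
    import tempfile
    from pathlib import Path

    SHARED = Path("/mnt/lustre/datasets/training-set")   # hypothetical shared FS path
    LOCAL = Path(tempfile.gettempdir()) / "hot-tier"     # fast node-local scratch

    def stage_in(shared_dir: Path, local_dir: Path) -> Path:
        # Copy each file once, sequentially; skips files already staged
        local_dir.mkdir(parents=True, exist_ok=True)
        for f in shared_dir.glob("*.tfrecord"):
            target = local_dir / f.name
            if not target.exists():
                shutil.copy2(f, target)
        return local_dir

    data_root = stage_in(SHARED, LOCAL) if SHARED.exists() else SHARED
    print("training will read from", data_root)          # feed this path to the data loader

Systems like the one in [DLC+22] perform this placement transparently inside the I/O path instead of asking the application to manage paths itself.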

To sum up, these three research directions, and their respective contributions, will enable a next generation of HPC infrastructures and services that can efficiently meet the demands of Big Data and AI workloads.

References

[DLC+22] Dantas, M., Leitão, D., Cui, P., Macedo, R., Liu, X., Xu, W., Paulo, J., 2022. Accelerating Deep Learning Training Through Transparent Storage Tiering. IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGrid)  

[ECS+17] Joseph, E., Conway, S., Sorensen, B., Thorp, M., 2017. Trends in the Worldwide HPC Market (Hyperion Presentation). HPC User Forum at HLRS.  

[FMP21] Faria, A., Macedo, R., Paulo, J., 2021. Pods-as-Volumes: Effortlessly Integrating Storage Systems and Middleware into Kubernetes. Workshop on Container Technologies and Container Clouds (WoC’21). 

[KWG+13] Katal, A., Wazid, M. and Goudar, R.H., 2013. Big data: issues, challenges, tools and good practices. International conference on contemporary computing (IC3). 

[NCR+18] Netto, M.A., Calheiros, R.N., Rodrigues, E.R., Cunha, R.L. and Buyya, R., 2018. HPC cloud for scientific and business applications: Taxonomy, vision, and research challenges. ACM Computing Surveys (CSUR). 

[MTH+22] Macedo, R., Tanimura, Y., Haga, J., Chidambaram, V., Pereira, J., Paulo, J., 2022. PAIO: General, Portable I/O Optimizations With Minor Application Modifications. USENIX Conference on File and Storage Technologies (FAST). 

[OG+15] Osseyran, A. and Giles, M. eds., 2015. Industrial applications of high-performance computing: best global practices. 

[RD+15] Reed, D.A. and Dongarra, J., 2015. Exascale computing and big data. Communications of the ACM. 

[ENO+21] Esteves, T., Neves, F., Oliveira, R., Paulo, J., 2021. CaT: Content-aware Tracing and Analysis for Distributed Systems. ACM/IFIP Middleware conference (Middleware). 

[YDI+16] Yildiz, O., Dorier, M., Ibrahim, S., Ross, R. and Antoniu, G., 2016. On the root causes of cross-application I/O interference in HPC storage systems. IEEE International Parallel and Distributed Processing Symposium (IPDPS).

By INESC TEC
