Three years of building bridges in HPC research between Europe and Latin America: RISC2 project comes to an end
https://www.risc2-project.eu/2023/11/16/three-years-of-building-bridges-in-hpc-research-between-europe-and-latin-america-risc2-project-comes-to-an-end/ (16 November 2023)

Artificial intelligence, personalised medicine, the development of new drugs or the fight against climate change. These are just a few examples of areas where high performance computing has an impact and could prove to be essential. With the aim of fostering cooperation between Europe and Latin America in this field, 16 organisations from the two continents have launched the RISC2 project.

“The RISC2 project has proven to be a team effort in which European and Latin American partners worked together to drive HPC collaboration forward. We have been able to create a lively and active community across the Atlantic to stimulate dialogue and boost cooperation that won’t die with RISC2’s formal end”, says Fabrizio Gagliardi, managing director of RISC2.

Since 2021, this knowledge-sharing network has organised webinars, summer schools, and meetings with policymakers, and has participated in conferences and dissemination events on both sides of the Atlantic. The project also resulted in the HPC Observatory Repository, a collection of documents and training materials produced as part of the project, and the White Paper on HPC R&I Collaboration Opportunities, a document that reviews the key socio-economic and environmental factors and trends that influence HPC needs.

These were two of the outputs highlighted by European Commission officials and experts during the final evaluation of the project, and they could provide continuity to the work carried out by the consortium over the last three years, in line with the wishes of the partners and the advice of the evaluators. “Beyond RISC2, we should keep the momentum and leverage the importance of Latin America in the frame of the Green Deal actions: HPC stakeholders should encourage policymakers to build bilateral agreements and offer open calls focused on HPC collaboration”, reflects Fabrizio Gagliardi.

Scientific Machine Learning and HPC
https://www.risc2-project.eu/2023/06/28/scientific-machine-learning-and-hpc/ (28 June 2023)

In recent years we have seen rapid growth of interest in artificial intelligence in general, and in machine learning (ML) techniques in particular, across different branches of science and engineering. The rapid growth of the Scientific Machine Learning field derives from the combined development and use of efficient data analysis algorithms, the availability of data from scientific instruments and computer simulations, and advances in high-performance computing. On May 25, 2023, COPPE/UFRJ organized a forum to discuss developments in artificial intelligence and its impact on society [*].

Alvaro Coutinho, coordinator of the High Performance Computing Center (Nacad) at COPPE/UFRJ, presented advances in AI in engineering and stressed the importance of multidisciplinary research networks to address current issues in Scientific Machine Learning. He took the opportunity to highlight the need for Brazil to invest in high performance computing capacity.

The country’s sovereignty requires autonomy in producing ML advances, which depends on HPC support at universities and research centers. Brazil has nine machines in the TOP500 list of the most powerful computer systems in the world, but almost all of them belong to the oil company Petrobras, and universities need much more. ML is well known to require HPC, and when combined with scientific computer simulations it becomes essential.

The conventional notion of ML involves training an algorithm to automatically discover patterns, signals, or structures that may be hidden in huge databases and whose exact nature is unknown and therefore cannot be explicitly programmed. This field faces two major drawbacks: the need for a significant volume of (labelled) data, which is expensive to acquire, and a limited capacity for extrapolation, since making predictions beyond the scenarios contained in the training data is difficult.
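The extrapolation drawback is easy to reproduce. Below is a minimal, self-contained sketch (our own illustration, not taken from the references): a polynomial fitted to data observed on a narrow interval matches the underlying signal well inside that interval, but diverges badly outside it.

```python
import numpy as np

# Toy data: a sine-like signal observed only on [0, 2] (the "training" range).
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 2.0, 50)
y_train = np.sin(x_train) + 0.05 * rng.standard_normal(x_train.size)

# Fit a cubic polynomial -- a stand-in for any purely data-driven model.
coeffs = np.polyfit(x_train, y_train, deg=3)

# Interpolation (inside the training range) is accurate...
print(np.polyval(coeffs, 1.0), np.sin(1.0))   # close agreement

# ...but extrapolation to x = 5, far outside [0, 2], fails badly.
print(np.polyval(coeffs, 5.0), np.sin(5.0))   # large disagreement
```

Embedding physical knowledge into the learning problem, as in the physics-informed approaches of [3] and [5], is one way Scientific ML tries to mitigate exactly this failure mode.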

Considering that an algorithm’s predictive ability is a learning skill, these challenges must be addressed to improve the analytical and predictive capacity of Scientific ML algorithms, for example, to maximize their impact in renewable energy applications. References [1-5] illustrate recent advances in Scientific Machine Learning in different areas of engineering and computer science.

References:

[*] https://www.coppe.ufrj.br/pt-br/planeta-coppe-noticias/noticias/coppe-e-sociedade-especialistas-debatem-os-reflexos-da-inteligencia

[1] Baker, Nathan, Frank Alexander, Timo Bremer, Aric Hagberg, Yannis Kevrekidis, Habib Najm, Manish Parashar, Abani Patra, James Sethian, Stefan Wild, Karen Willcox, and Steven Lee. Workshop Report on Basic Research Needs for Scientific Machine Learning: Core Technologies for Artificial Intelligence. United States: N. p., 2019. Web. doi:10.2172/1478744.

[2] Brunton, Steven L., Bernd R. Noack, and Petros Koumoutsakos. “Machine learning for fluid mechanics.” Annual Review of Fluid Mechanics 52 (2020): 477-508.

[3] Karniadakis, George Em, et al. “Physics-informed machine learning.” Nature Reviews Physics 3.6 (2021): 422-440.

[4] Inria White Book on Artificial Intelligence: Current challenges and Inria’s engagement, 2nd edition, 2021. URL: https://www.inria.fr/en/white-paper-inria-artificial-intelligence

[5] Silva, Romulo, Umair bin Waheed, Alvaro Coutinho, and George Em Karniadakis. “Improving PINN-based Seismic Tomography by Respecting Physical Causality.” In AGU Fall Meeting Abstracts, vol. 2022, pp. S11C-09. 2022.

Leveraging HPC technologies to unravel epidemic dynamics
https://www.risc2-project.eu/2022/10/17/leveraging-hpc-technologies-to-unravel-epidemic-dynamics/ (17 October 2022)

When we talk about the 14th century, we are probably referring to one of the most adverse periods of human history. It was an era of regular armed conflicts, declining social systems, famine, and disease. It was the time of the bubonic plague pandemic, the Black Death, that wiped out millions of people in Europe, Africa, and Asia [1].

Several factors contributed to the catastrophic outcomes of the Black Death. The crisis was aggravated by the lack of two important components: knowledge and technology. There was no clue about the spread dynamics of the disease, and containment policies were desperately based on assumptions or beliefs. Some opted for self-isolation to get away from the “bad air” that was believed to be the cause of the illness [2]. Others thought the plague was a divine punishment and persecuted heretics in order to appease the heavens [3]. Though the first of these two strategies was actually very effective, the second one only deepened the tragedy of that scenario.

The bubonic plague of the 14th century is a great example of how damaging ignorance can be in the context of epidemics. If the transmission mechanisms are not well understood, we are not able to design effective measures against them. We may end up like our medieval predecessors, making things much worse. Fortunately, advances in science and technology have provided humanity with powerful tools to comprehend infectious diseases and rapidly develop response plans. In this particular matter, epidemic models and simulations have become crucial.

In the recent COVID-19 events, many public health authorities relied on the outcomes of models to determine the most probable paths of the epidemic and make informed decisions regarding sanitary measures [4]. Epidemic models have been around for a long time and have become more and more sophisticated. One reason is that they feed on data that must be collected and processed, and that has increased in quantity and variety.

Data contains interesting patterns that give hints about the influence of apparently non-epidemiological factors such as mobility and interaction type [5]. This is how, in the 19th century, John Snow managed to discover the cause of a cholera epidemic in Soho. He plotted the registered cholera cases on a map and saw that they clustered around a water pump that he presumed was contaminated [6]. Thanks to Dr. Snow’s findings, water quality came to be considered an important component of public health.

As models grow in complexity, the demand for more powerful computing systems also increases. In advanced approaches such as agent-based [7] and network (graph) models [8], every person is represented inside a complex framework in which the infection spreads according to specific rules. These rules can be related to the nature of the relations between individuals, their number of contacts, the places they visit, disease characteristics, and even stochastic influences. Such frameworks commonly comprise millions of individuals, because we often want to analyze countrywide effects.
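As a minimal sketch of the idea (our toy illustration, not the model of [9] or [10]), consider a stochastic SIR-style process on a small synthetic contact network; the topology, the per-contact transmission probability P_INFECT, and the recovery time T_RECOVER are all assumed values chosen only for demonstration:

```python
import random
import networkx as nx

random.seed(42)

# Toy contact network: 1,000 people with ~10 contacts each
# (real models of this kind use millions of nodes).
g = nx.watts_strogatz_graph(n=1000, k=10, p=0.1)

P_INFECT = 0.05    # assumed per-contact, per-day transmission probability
T_RECOVER = 7      # assumed days until recovery

status = {node: "S" for node in g}   # S: susceptible, I: infected, R: recovered
infected_since = {0: 0}
status[0] = "I"                      # seed one infection

for day in range(1, 121):
    newly_infected = set()
    for node, state in list(status.items()):
        if state != "I":
            continue
        # The infection spreads along edges, i.e., through declared contacts.
        for neighbor in g.neighbors(node):
            if status[neighbor] == "S" and random.random() < P_INFECT:
                newly_infected.add(neighbor)
        if day - infected_since[node] >= T_RECOVER:
            status[node] = "R"
    for node in newly_infected:
        status[node] = "I"
        infected_since[node] = day

print("ever infected:", sum(1 for s in status.values() if s != "S"))
```

Even this toy version hints at the computational cost: every time step touches every edge of every infectious node, so scaling to millions of nodes and hundreds of stochastic repetitions quickly calls for HPC resources.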

In brief, to unravel epidemic dynamics we need to process and produce a lot of accurate information, and we need to do it fast. High-performance computing (HPC) systems provide high-spec hardware and support advanced techniques such as parallel computing, which accelerates calculations by using several resources at a time to perform one or several tasks concurrently. This is an advantage for stochastic epidemic models, which require hundreds of independent executions to deliver reliable outputs. Moreover, frameworks with millions of nodes or agents need many gigabytes of memory to be processed, a requirement that can be met only by HPC systems.
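Those hundreds of independent executions are embarrassingly parallel, so they map naturally onto many cores. A hedged sketch of the pattern follows; the toy chain-binomial model inside one_run and all its parameters are assumptions for illustration, not the project's model:

```python
import random
from multiprocessing import Pool

POP = 10_000          # assumed population size
CONTACTS = 10         # assumed daily contacts per infected person
P_TRANSMIT = 0.15     # assumed per-contact transmission probability

def one_run(seed: int) -> int:
    """One stochastic realisation; returns the final outbreak size."""
    rng = random.Random(seed)
    susceptible, infected, total = POP - 1, 1, 1
    while infected > 0:
        # Each contact of an infected person transmits with probability
        # P_TRANSMIT, provided the contact happens to be susceptible.
        new = sum(1 for _ in range(infected * CONTACTS)
                  if rng.random() < P_TRANSMIT * susceptible / POP)
        new = min(new, susceptible)
        susceptible -= new
        infected = new            # the previous generation recovers
        total += new
    return total

if __name__ == "__main__":
    # Run 200 independent realisations concurrently, one seed per run.
    with Pool() as pool:
        sizes = pool.map(one_run, range(200))
    print("mean outbreak size:", sum(sizes) / len(sizes))
```

On an HPC cluster the same pattern is usually expressed with a batch scheduler or MPI rather than a single-node process pool, but the principle is identical: independent seeds, concurrent execution, aggregated statistics.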

Based on the work of Cruz et al. [9], we developed a model that represents the spread dynamics of COVID-19 in Costa Rica [10]. This model consists of a contact network of five million nodes, in which every Costa Rican citizen has family, school, work, or random connections with their neighbors. These relations, together with the infection status of the neighbors, influence the probability of getting infected. The infection status varies with time, as people evolve from having no symptoms to having mild, severe, or critical conditions; people may also remain asymptomatic. The model additionally addresses variations in location, school and workplace sizes, age, mobility, and vaccination rates, and some of these inputs are stochastic.

Such a model takes only a few hours to simulate on an HPC cluster, whereas conventional systems would require much more time. We managed to evaluate scenarios in which different sanitary measures were changed or eliminated. This analysis brought interesting results: for example, going to a meeting with family or friends could be as harmful as attending a concert with dozens of strangers, in terms of the additional infections these activities would generate. Such findings are valuable inputs for health authorities, because they demonstrate that preventing certain behaviors in the population can delay the peak of infections and give them more time to save lives.

Even though HPC has been fundamental in computational epidemiology, giving key insights into epidemic dynamics, we still have to bring this technology to some contexts. For example, we must first strengthen health and information systems in developing countries to take maximum advantage of HPC and epidemic models. This can be achieved through interinstitutional and international collaboration, but also through national policies that support research and development. If we encourage the study of infectious diseases, we will benefit from this knowledge and be able to approach future pandemics better.

 

References

[1] Encyclopedia Britannica. n.d. Crisis, recovery, and resilience: Did the Middle Ages end? [online] Available at: <https://www.britannica.com/topic/history-of-Europe/Crisis-recovery-and-resilience-Did-the-Middle-Ages-end> [Accessed 13 September 2022].

[2] Mellinger, J., 2006. Fourteenth-Century England, Medical Ethics, and the Plague. AMA Journal of Ethics, 8(4), pp.256-260. 

[3] Carr, H., 2020. Black Death Quarantine: How Did We Try To Contain The Deadly Disease? [online] Historyextra.com. Available at: <https://www.historyextra.com/period/medieval/plague-black-death-quarantine-history-how-stop-spread/> [Accessed 13 September 2022].

[4] McBryde, E., Meehan, M., Adegboye, O., Adekunle, A., Caldwell, J., Pak, A., Rojas, D., Williams, B. and Trauer, J., 2020. Role of modelling in COVID-19 policy development. Paediatric Respiratory Reviews, 35, pp.57-60. 

[5] Pasha, D., Lundeen, A., Yeasmin, D. and Pasha, M., 2021. An analysis to identify the important variables for the spread of COVID-19 using numerical techniques and data science. Case Studies in Chemical and Environmental Engineering, 3, p.100067. 

[6] Bbc.co.uk. 2014. Historic Figures: John Snow (1813 – 1858). [online] Available at: <https://www.bbc.co.uk/history/historic_figures/snow_john.shtml> [Accessed 13 September 2022]. 

[7] Publichealth.columbia.edu. 2022. Agent-Based Modeling. [online] Available at: <https://www.publichealth.columbia.edu/research/population-health-methods/agent-based-modeling> [Accessed 13 September 2022]. 

[8] Keeling, M. and Eames, K., 2005. Networks and epidemic models. Journal of The Royal Society Interface, 2(4), pp.295-307. 

[9] Cruz, E., Maciel, J., Clozato, C., Serpa, M., Navaux, P., Meneses, E., Abdalah, M. and Diener, M., 2021. Simulation-based evaluation of school reopening strategies during COVID-19: A case study of São Paulo, Brazil. Epidemiology and Infection, 149. 

[10] Abdalah, M., Soto, C., Arce, M., Cruz, E., Maciel, J., Clozato, C. and Meneses, E., 2022. Understanding COVID-19 Epidemic in Costa Rica Through Network-Based Modeling. Communications in Computer and Information Science, pp.61-75. 

 

By CeNAT

HPC meets AI and Big Data
https://www.risc2-project.eu/2022/10/06/hpc-meets-ai-and-big-data/ (6 October 2022)

HPC services are no longer solely targeted at highly parallel modelling and simulation tasks. Indeed, the computational power offered by these services is now being used to support data-centric Big Data and Artificial Intelligence (AI) applications. By combining both types of computational paradigms, HPC infrastructures will be key for improving the lives of citizens, speeding up scientific breakthroughs in different fields (e.g., health, IoT, biology, chemistry, physics), and increasing the competitiveness of companies [OG+15, NCR+18].

As the utility and usage of HPC infrastructures increase, more computational and storage power is required to efficiently handle the growing number of targeted applications. In fact, many HPC centers are now aiming at exascale supercomputers supporting at least one exaFLOPS (10^18 operations per second), which represents a thousandfold increase in processing power over the first petascale computer deployed in 2008 [RD+15]. Although this is a necessary requirement for handling the increasing number of HPC applications, several outstanding challenges still need to be tackled so that this extra computational power can be fully leveraged.
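As a quick sanity check on that scale factor (plain arithmetic, nothing project-specific):

```python
exaflops = 1e18    # one exaFLOPS: 10**18 floating-point operations per second
petaflops = 1e15   # one petaFLOPS: the scale first reached in 2008

print(exaflops / petaflops)   # 1000.0 -> a thousandfold increase
```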

Management of large infrastructures and heterogeneous workloads: By adding more compute and storage nodes, one is also increasing the complexity of the overall HPC distributed infrastructure, making it harder to monitor and manage. This complexity grows with the need to support highly heterogeneous applications that translate into different workloads with specific data storage and processing needs [ECS+17]. For example, on the one hand, traditional scientific modeling and simulation tasks require large slices of computational time, are CPU-bound, and rely on iterative approaches (parametric/stochastic modeling). On the other hand, data-driven Big Data applications comprise shorter computational tasks that are I/O-bound and, in some cases, have real-time response requirements (i.e., they are latency-oriented). Also, many of these applications leverage AI and machine learning tools that require specific hardware (e.g., GPUs) in order to be efficient.

Support for general-purpose analytics: The increased heterogeneity also demands that HPC infrastructures support general-purpose AI and Big Data applications that were not designed explicitly to run on specialised HPC hardware [KWG+13], so that developers are not required to significantly change their applications for them to execute efficiently on HPC clusters.

Avoiding the storage bottleneck: Only increasing the computational power and improving the management of HPC infrastructures may still not be enough to fully harness the capabilities of these infrastructures. In fact, Big Data and AI applications are data-driven and require efficient data storage and retrieval from HPC clusters. With an increasing number of applications and heterogeneous workloads, the storage systems supporting HPC may easily become a bottleneck [YDI+16, ECS+17]. Indeed, as pointed out by several studies, storage access time is one of the major bottlenecks limiting the efficiency of current and next-generation HPC infrastructures.

In order to address these challenges, RISC2 partners are exploring:

New monitoring and debugging tools that can aid in the analysis of complex AI and Big Data workloads in order to pinpoint potential performance and efficiency bottlenecks, while helping system administrators and developers troubleshoot these [ENO+21].

Emerging virtualization technologies, such as containers, that enable users to efficiently deploy and execute traditional AI and Big Data applications in an HPC environment, without requiring any changes to their source code [FMP21].

The Software-Defined Storage paradigm, in order to improve the Quality-of-Service (QoS) of HPC storage services when supporting hundreds to thousands of data-intensive AI and Big Data applications [DLC+22, MTH+22].

To sum up, these three research goals and their respective contributions will enable the next generation of HPC infrastructures and services to efficiently meet the demands of Big Data and AI workloads.

 

References

[DLC+22] Dantas, M., Leitão, D., Cui, P., Macedo, R., Liu, X., Xu, W., Paulo, J., 2022. Accelerating Deep Learning Training Through Transparent Storage Tiering. IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGrid)  

[ECS+17] Joseph, E., Conway, S., Sorensen, B., Thorp, M., 2017. Trends in the Worldwide HPC Market (Hyperion Presentation). HPC User Forum at HLRS.  

[ENO+21] Esteves, T., Neves, F., Oliveira, R., Paulo, J., 2021. CaT: Content-aware Tracing and Analysis for Distributed Systems. ACM/IFIP Middleware Conference (Middleware).

[FMP21] Faria, A., Macedo, R., Paulo, J., 2021. Pods-as-Volumes: Effortlessly Integrating Storage Systems and Middleware into Kubernetes. Workshop on Container Technologies and Container Clouds (WoC’21).

[KWG+13] Katal, A., Wazid, M. and Goudar, R.H., 2013. Big data: issues, challenges, tools and good practices. International Conference on Contemporary Computing (IC3).

[MTH+22] Macedo, R., Tanimura, Y., Haga, J., Chidambaram, V., Pereira, J., Paulo, J., 2022. PAIO: General, Portable I/O Optimizations With Minor Application Modifications. USENIX Conference on File and Storage Technologies (FAST).

[NCR+18] Netto, M.A., Calheiros, R.N., Rodrigues, E.R., Cunha, R.L. and Buyya, R., 2018. HPC cloud for scientific and business applications: Taxonomy, vision, and research challenges. ACM Computing Surveys (CSUR).

[OG+15] Osseyran, A. and Giles, M. eds., 2015. Industrial applications of high-performance computing: best global practices.

[RD+15] Reed, D.A. and Dongarra, J., 2015. Exascale computing and big data. Communications of the ACM.

[YDI+16] Yildiz, O., Dorier, M., Ibrahim, S., Ross, R. and Antoniu, G., 2016, May. On the root causes of cross-application I/O interference in HPC storage systems. IEEE International Parallel and Distributed Processing Symposium (IPDPS). 

 

By INESC TEC

RISC2 partners met at UFRJ during the “Eureka meets the Atlantic” event
https://www.risc2-project.eu/2022/04/21/risc2-partners-met-at-ufrj-during-the-eureka-meets-the-atlantic-event/ (21 April 2022)

Alvaro Coutinho and Marta Mattoso, from UFRJ, and Rui Oliveira, from INESC TEC, met during the “EUREKA meets the ATLANTIC” event, which took place on March 29-30 at the UFRJ Campus, in Rio de Janeiro, Brazil, to discuss the collaboration between NACAD and MACC. For both RISC2 partner institutions, Earth Observatory is a key strategic R&D area, to which UFRJ and INESC TEC allocate significant human and technological resources.

“The foreseeable expansion of Eureka to Brazil means great new opportunities for all RISC2 partners”, said Rui Oliveira, member of INESC TEC’s Board and director of the Center for Advanced Computing at the University of Minho (MACC).

RISC2 partner is a member of AISIS 2021’s Scientific Committee
https://www.risc2-project.eu/2021/11/23/risc2-partner-is-a-member-of-aisis-2021s-scientific-committee/ (23 November 2021)

Rafael Mayo Garcia, from CIEMAT, one of the RISC2 partners, took part in AISIS 2021 as a member of its Scientific Committee, from the 11th to the 15th of October 2021.

Rafael Mayo Garcia joined the scientific committee at the Artificial Intelligence for Science, Industry and Society (AISIS) 2021.

AISIS is a conference that brings together scientists, industry representatives, and policymakers to discuss the implementation of AI in a variety of areas and disciplines. This year’s edition had a strong focus on how AI has facilitated the global response to the COVID-19 pandemic. The event was hosted online by the National Autonomous University of Mexico (UNAM).

According to Rafael Mayo Garcia, he worked “on the definition of the agenda and the review of contributions” with different members from around the world. The program and agenda, in which RISC2’s partner played an important role, comprised several keynote speakers, topics, and convenors.

Learn more about this event and Rafael Mayo Garcia’s role in it here.

RISC2 with a strong presence at CARLA 2021
https://www.risc2-project.eu/2021/10/05/https-www-risc2-project-eu-2021-10-05-risc2-with-a-strong-presence-at-carla-2021/ (5 October 2021)

The RISC2 project participated in the Latin America High-Performance Computing Conference (CARLA 2021), which took place between September 27 and October 15, 2021, with 888 registered attendees from 25 different countries. The RISC2 consortium participated in the organization of several activities during this international conference, within the scope of the collaboration between the European and Latin American communities working on HPC-related topics.

CARLA is an international conference aimed at providing a forum to foster the growth and strength of the High-Performance Computing (HPC) community in Latin America through the exchange and dissemination of new ideas, techniques, and research in HPC and its application areas. The general chair of the 2021 edition was Isidoro Gitler, from Cinvestav, who coordinated and participated in all the event’s activities.

Workshops

Different RISC2 partners were involved in the organization of the scientific workshops of the CARLA 2021 conference. Of the seven workshops organized for the conference, two came from the RISC2 consortium. The Workshop on HPC Collaboration between Europe and Latin America took place online on October 5, 2021. The goal was to provide a space dedicated to exchanging experiences, towards the promotion and support of new collaborations across different countries of Europe and Latin America, within the framework of the recently launched ‘A network for supporting the coordination of High-Performance Computing research between Europe and Latin America’ (RISC2). This workshop reached a maximum of 55 participants, with Pedro Vieira Alberto, from the University of Coimbra, as one of the invited speakers. Ulisses Cortés, from the Barcelona Supercomputing Center, presented the RISC2 project as an example of HPC collaboration between Europe and Latin America. The chairs of this event were Ulisses Cortés and Rafael Mayo-García, from CIEMAT.

Another workshop organized by the RISC2 team was the Workshop on HPC and Energy, held online on October 4, with Álvaro Coutinho, from COPPE, as chair. This workshop focused on HPC techniques applied to the energy sector, where they can improve and reform many industrial processes. HPC can provide several solutions to the energy sector, e.g., oil and gas solutions for upstream, midstream, and downstream problems; improving wind energy performance; solving combustion efficiency issues for transportation systems; making nuclear systems more efficient and safer; improving solar energy systems; and improving the quality and efficiency of seismic and geophysical simulations.

It is also worth mentioning that Ginés Guerrero, from NLHPC, was one of the workshop chairs of CARLA 2021.

Tutorials

Carla Osthoff, from the Laboratório Nacional de Computação Científica, was a Tutorial Chair for the 11 tutorials accepted at CARLA 2021. CARLA 2021 provided tutorials and hands-on workshops at both introductory and advanced levels, specifically designed for undergraduate and master’s students across Latin American countries. There were two different periods: Fundamental Tutorials, comprising six tutorials held the week before CARLA 2021, and Advanced Tutorials, with five tutorials held the week after CARLA 2021. The CARLA 2021 tutorials were supported by Latin American, Caribbean, and European institutions.
Esteban Mosckos, from the Universidad de Buenos Aires, was involved in the organization of two tutorials: “OpenMP: Introduction to shared memory models” and “Introduction to Distributed Memory Models using MPI”. Both activities consisted of theory and hands-on exercises, lasted four hours, and had close to 40 attendees each.

The NLHPC partner was also responsible for a tutorial on working with a resource manager on an HPC infrastructure, covering the use of SLURM. Two more tutorials were organized by NLHPC: one on performance analysis tools, with the participation of a member of the NLHPC Scientific Committee, and another on quantum computing, with the participation of IBM.

The CARLA 2021 conference had more than 30 institutions on its board committee and more than 115 attendees connected simultaneously.

All the videos are available here.

National Laboratory for Scientific Computing participated in the ISC2021
https://www.risc2-project.eu/2021/08/13/national-laboratory-for-scientific-computing-participated-in-the-isc2021/ (13 August 2021)

The National Laboratory for Scientific Computing (LNCC), one of the RISC2 partners from Brazil, presented two posters at the Event for High Performance Computing, Machine Learning and Data Analysis (ISC) 2021.

The posters “Developing Efficient Scientific Gateways for Bioinformatics in Supercomputing Environments Supported by Artificial Intelligence” and “Scalable Numerical Method for Biphasic Flows in Heterogeneous Porous Media in High-Performance Computational Environments” are part of LNCC’s activities in the RISC2 project.

According to Carla Osthoff (LNCC), the former poster presents a collaborative project that aims to develop green and intelligent scientific gateways for bioinformatics, supported by high-performance computing (HPC) environments and specialized technologies such as scientific workflows, data mining, machine learning, and deep learning. The efficient analysis and interpretation of Big Data opens new challenges in molecular biology, genetics, biomedicine, and healthcare to improve personalized diagnostics and therapeutics, so new avenues are needed to deal with this massive amount of information. New paradigms in bioinformatics and computational biology drive the storing, managing, and accessing of data, and HPC and Big Data advances in this domain represent both a vast new field of opportunities for bioinformatics researchers and a significant challenge.

The Bioinfo-Portal science gateway is a multiuser Brazilian infrastructure for bioinformatics applications that benefits from this HPC infrastructure. The poster presents several challenges to executing applications efficiently and discusses findings on how to improve the use of computational resources. The team performed several large-scale bioinformatics experiments that are considered computationally intensive and time-consuming, and is currently coupling artificial intelligence to generate models that analyze computational and bioinformatics metadata, in order to understand how automatic learning can predict the efficient use of computational resources. The computational executions are carried out on the Santos Dumont supercomputer. This multidisciplinary project requires expertise from several knowledge areas spread across four research institutes (LNCC, UFRGS, INRIA Bordeaux, and CENAT in Costa Rica), and it is supported by Brazilian funding agencies (CNPq, CAPES) and the RISC2 project.

The latter poster presents a project that aims to develop a scalable numerical approach for biphasic flows in heterogeneous porous media in high-performance computing environments. In this system, an elliptic subsystem determines the velocity field, and a non-linear hyperbolic equation represents the transport of the flowing phases (the saturation equation). The model applies a locally conservative finite element method for the mixture velocity, and a high-order, non-oscillatory finite volume method, based on central schemes, for the non-linear hyperbolic equation that governs phase saturation. Specifically, the project aims to build scalable codes for high-performance environments.

Having identified the bottlenecks in the code, the project is now working on four research areas: parallel I/O routines and high-performance visualization, to decrease the I/O transfer bottleneck; parallel programming, to reduce code bottlenecks on multicore and manycore architectures; and adaptive MPI, to decrease the message communication bottleneck. The poster presents the first performance evaluation results, which are being used to guide the project’s research areas. This multidisciplinary endeavor requires expertise from several knowledge areas spread across four research institutes (LNCC, UFRGS, and UFLA in Brazil, and CENAT in Costa Rica), and it is supported by Brazilian funding agencies (CNPq, CAPES) and the RISC2 project.
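For readers unfamiliar with this formulation, the velocity-saturation split described above can be written, in one standard form (our notation, not necessarily the exact model on the poster), as:

```latex
% Elliptic subsystem: Darcy velocity u and pressure p in a medium with
% absolute permeability K(x) and total mobility \lambda(s).
\[
  \mathbf{u} = -K(\mathbf{x})\,\lambda(s)\,\nabla p,
  \qquad
  \nabla \cdot \mathbf{u} = q .
\]
% Non-linear hyperbolic transport of the phase saturation s, with
% fractional-flow function f(s); this is the saturation equation handled
% by the central-scheme finite volume method.
\[
  \frac{\partial s}{\partial t}
  + \nabla \cdot \bigl( f(s)\,\mathbf{u} \bigr) = 0 .
\]
```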
