Scientific Machine Learning and HPC
https://www.risc2-project.eu/2023/06/28/scientific-machine-learning-and-hpc/ (28 June 2023)

In recent years we have seen rapid growth of interest in artificial intelligence in general, and in machine learning (ML) techniques in particular, across different branches of science and engineering. The rapid growth of the Scientific Machine Learning field derives from the combined development and use of efficient data analysis algorithms, the availability of data from scientific instruments and computer simulations, and advances in high-performance computing. On May 25, 2023, COPPE/UFRJ organized a forum to discuss developments in Artificial Intelligence and their impact on society [*].

Alvaro Coutinho, coordinator of the High Performance Computing Center (Nacad) at COPPE/UFRJ, presented advances in AI for engineering and stressed the importance of multidisciplinary research networks to address current issues in Scientific Machine Learning. He also took the opportunity to highlight the need for Brazil to invest in high-performance computing capacity.

The country's sovereignty requires autonomy in producing ML advances, which in turn depends on HPC support at universities and research centres. Brazil has nine machines in the TOP500 list of the most powerful computer systems in the world, but almost all of them belong to the oil company Petrobras, and universities need much more. ML is well known to require HPC, and when combined with scientific computer simulations it becomes essential.

The conventional notion of ML involves training an algorithm to automatically discover patterns, signals, or structures that may be hidden in huge databases and whose exact nature is unknown and therefore cannot be explicitly programmed. This approach faces two major drawbacks: the need for a significant volume of labelled data, which is expensive to acquire, and a limited ability to extrapolate, that is, to make predictions beyond the scenarios contained in the training data.
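
To make the second drawback concrete, here is a small illustrative sketch (not taken from the forum or the references; it assumes NumPy and scikit-learn and an invented target function). A model trained only on inputs in [0, 1] interpolates well but saturates when asked to predict at x = 1.5 or x = 2.0:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=(200, 1))   # training inputs only cover [0, 1]
y_train = np.exp(x_train[:, 0])                  # underlying relationship: y = exp(x)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(x_train, y_train)

x_test = np.array([[0.5], [1.5], [2.0]])         # 0.5 interpolates, 1.5 and 2.0 extrapolate
print(model.predict(x_test))                     # beyond x = 1 the forest saturates near exp(1)
print(np.exp(x_test[:, 0]))                      # true values: ~1.65, ~4.48, ~7.39
```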

Considering that an algorithm's predictive ability is a learned skill, these challenges must be addressed to improve the analytical and predictive capacity of Scientific ML algorithms and, for example, to maximize their impact in applications such as renewable energy. References [1-5] illustrate recent advances in Scientific Machine Learning in different areas of engineering and computer science.
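
References [3] and [5] rely on physics-informed learning, where scarce data is complemented by the governing equations themselves. The sketch below is only a hedged illustration of that general idea, not the method of any cited work: it assumes NumPy and SciPy, replaces the neural network with a simple polynomial, and uses the toy equation du/dx = -u with u(0) = 1, whose exact solution is e^(-x). The loss adds a data-mismatch term to a physics-residual term evaluated at collocation points:

```python
import numpy as np
from scipy.optimize import minimize

x_data = np.array([0.0, 0.3, 0.7])        # a few "expensive" observations
u_data = np.exp(-x_data)
x_col = np.linspace(0.0, 2.0, 50)         # collocation points where the physics is enforced

def u(c, x):                              # polynomial surrogate u(x) = sum_k c_k x^k
    return np.polyval(c, x)

def du_dx(c, x):                          # its analytic derivative
    return np.polyval(np.polyder(c), x)

def loss(c):
    data_term = np.mean((u(c, x_data) - u_data) ** 2)
    physics_term = np.mean((du_dx(c, x_col) + u(c, x_col)) ** 2)   # residual of du/dx + u = 0
    return data_term + physics_term

c0 = np.zeros(6)                          # degree-5 polynomial coefficients
result = minimize(loss, c0, method="BFGS")
print(np.max(np.abs(u(result.x, x_col) - np.exp(-x_col))))   # error against the exact solution
```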

References:

[*] https://www.coppe.ufrj.br/pt-br/planeta-coppe-noticias/noticias/coppe-e-sociedade-especialistas-debatem-os-reflexos-da-inteligencia

[1] Baker, Nathan, Frank Alexander, Timo Bremer, Aric Hagberg, Yannis Kevrekidis, Habib Najm, Manish Parashar, Abani Patra, James Sethian, Stefan Wild, Karen Willcox, and Steven Lee. Workshop Report on Basic Research Needs for Scientific Machine Learning: Core Technologies for Artificial Intelligence. United States: N.p., 2019. Web. doi:10.2172/1478744.

[2] Brunton, Steven L., Bernd R. Noack, and Petros Koumoutsakos. “Machine learning for fluid mechanics.” Annual Review of Fluid Mechanics 52 (2020): 477-508.

[3] Karniadakis, George Em, et al. “Physics-informed machine learning.” Nature Reviews Physics 3.6 (2021): 422-440.

[4] Inria White Book on Artificial Intelligence: Current challenges and Inria’s engagement, 2nd edition, 2021. URL: https://www.inria.fr/en/white-paper-inria-artificial-intelligence

[5] Silva, Romulo, Umair bin Waheed, Alvaro Coutinho, and George Em Karniadakis. “Improving PINN-based Seismic Tomography by Respecting Physical Causality.” In AGU Fall Meeting Abstracts, vol. 2022, pp. S11C-09. 2022.

Towards a greater HPC capacity in Latin America
https://www.risc2-project.eu/2023/02/24/towards-a-greater-hpc-capacity-in-latin-america/ (24 February 2023)

High-Performance Computing (HPC) has proven to be a strong driver of science and technology development and is increasingly considered indispensable for most scientific disciplines. HPC is making a difference in key topics of great interest such as climate change, personalised medicine, engineering, astronomy, education, economics, industry and public policy. It has become a pillar of development for any country, one to which the great powers assign strategic importance and devote billions of dollars, in a competition without limits where data is the new gold.

A country that does not have the computational capacity to solve its own problems will have no alternative but to try to acquire solutions provided by others. One of the most important aspects of sovereignty in the 21st century is the ability to produce mathematical models and the capacity to solve them. Today, the availability of computing power commensurate with a country's wealth dramatically increases its capacity to produce knowledge. In the developed world, it is estimated that for every dollar invested in supercomputing, the return to society is of the order of US$ 44 (1) and to the academic world US$ 30 (2). For these reasons, HPC occupies an important place on the political and diplomatic agendas of developed countries.

In Latin America, investment in HPC is very low compared to what the US, Asia and Europe are doing. To quantify this difference, we present the tables below, which show the accumulated computing capacity in the TOP500 ranking of the 500 most powerful supercomputers in the world (3) (Table 1), and the local reality (Table 2). Other data are also included, such as the population (in millions), the number of researchers per 1,000 inhabitants (Res/1000), the computing capacity per researcher (GFlops/Res) and the computing capacity per billion US$ of GDP (TFlops/B US$). In Table 1, we have grouped the countries by geographical area. America appears as the area with the highest computing capacity, essentially due to the USA, which holds almost 45% of the world's computing capacity in the TOP500. It is followed by Asia and then Europe. The TOP500 list includes mainly academic research centres, but also industrial ones, typically used in applied research (many private systems are not listed because their owners prefer not to publish such information). For example, in Brazil, which shows good computing capacity with 88,175 TFlops, the vast majority is in the hands of the oil industry and only about 3,000 TFlops are used for basic research. Countries listed in the TOP500 invest in HPC from a few TFlops per billion US$ of GDP (Belgium 5, Spain 7, Bulgaria 8), through countries investing in the order of hundreds (Italy 176, Japan 151, USA 138), to thousands, as is the case of Finland with 1,478. For those countries where we were able to find data on the number of researchers, the figures range from a few GFlops per researcher (Belgium 19, Spain 24, Hungary 52) to close to 1,000 GFlops, i.e. 1 TFlop (USA 970, Italy 966), with Finland surpassing this barrier with 4,647. Note that, unlike what happens locally, countries with a certain degree of development invest in supercomputing every 3-4 years, so the data we are showing will soon be updated and there will be variations in the list. For example, this year a new supercomputer will come into operation in Spain (4) which, with an investment of some 150 million euros, will give Spain one of the most powerful supercomputers in Europe and the world.

| Country | Rpeak (TFlops) | Population (millions) | Res/1000 | GFlops/Res | TFlops/B US$ |
|---|---|---|---|---|---|
| United States | 3,216,124 | 335 | 9.9 | 969.7 | 138.0 |
| Canada | 71,911 | 39 | 8.8 | 209.5 | 40.0 |
| Brazil | 88,175 | 216 | 1.1 | 371.1 | 51.9 |
| AMERICA | 3,376,211 | 590 | | | |
| China | 1,132,071 | 1,400 | | | 67.4 |
| Japan | 815,667 | 124 | 10.0 | 657.8 | 151.0 |
| South Korea | 128,264 | 52 | 16.6 | 148.6 | 71.3 |
| Saudi Arabia | 98,982 | 35 | | | 141.4 |
| Taiwan | 19,562 | 23 | | | 21.7 |
| Singapore | 15,785 | 6 | | | 52.6 |
| Thailand | 13,773 | 70 | | | 27.5 |
| United Arab Emirates | 12,164 | 10 | | | 15.2 |
| India | 12,082 | 1,380 | | | 4.0 |
| ASIA | 2,248,353 | 3,100 | | | |
| Finland | 443,391 | 6 | 15.9 | 4,647.7 | 1,478.0 |
| Italy | 370,262 | 59 | 6.5 | 965.5 | 176.3 |
| Germany | 331,231 | 85 | 10.1 | 385.8 | 78.9 |
| France | 251,166 | 65 | 11.4 | 339.0 | 83.7 |
| Russia | 101,737 | 145 | | | 59.8 |
| United Kingdom | 92,563 | 68 | 9.6 | 141.8 | 29.9 |
| Netherlands | 56,740 | 18 | 10.6 | 297.4 | 56.7 |
| Switzerland | 38,600 | 9 | 9.4 | 456.3 | 48.3 |
| Sweden | 32,727 | 10 | 15.8 | 207.1 | 54.5 |
| Ireland | 26,320 | 5 | 10.6 | 496.6 | 65.8 |
| Luxembourg | 18,291 | 0.6 | | | 365.8 |
| Poland | 17,099 | 38 | 7.6 | 59.2 | 28.5 |
| Norway | 17,031 | 6 | 13.0 | 218.3 | 34.1 |
| Czech Republic | 12,914 | 10 | 8.3 | 155.6 | 43.0 |
| Spain | 10,296 | 47 | 7.4 | 29.6 | 7.4 |
| Slovenia | 10,047 | 2 | 9.9 | 507.4 | 167.5 |
| Austria | 6,809 | 9 | 11.6 | 65.2 | 13.6 |
| Bulgaria | 5,942 | 6 | | | 8.5 |
| Hungary | 4,669 | 10 | 9.0 | 51.9 | 23.3 |
| Belgium | 3,094 | 12 | 13.6 | 19.0 | 5.2 |
| EUROPE | 1,850,934 | 610.6 | | | |
| OTHER | | | | | |
| Australia | 60,177 | 26 | | | 40.1 |
| Morocco | 5,014 | 39 | | | 50.1 |

Table 1. HPC availability per researcher and relative to GDP in the TOP500 countries (includes HPC in industry).
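
The derived columns in Tables 1 and 2 can be reproduced from the raw figures. The snippet below is a small sketch added for clarity, not part of the original article; the GDP value used for the United States (about US$ 23,300 billion) is an illustrative assumption:

```python
def hpc_metrics(rpeak_tflops, population_millions, researchers_per_1000, gdp_billions_usd):
    researchers = population_millions * 1e6 * researchers_per_1000 / 1000   # absolute researcher count
    gflops_per_researcher = rpeak_tflops * 1000 / researchers               # Rpeak in GFlops per researcher
    tflops_per_billion_gdp = rpeak_tflops / gdp_billions_usd
    return round(gflops_per_researcher, 1), round(tflops_per_billion_gdp, 1)

# United States row of Table 1: Rpeak = 3,216,124 TFlops, 335 M inhabitants, 9.9 researchers per 1,000.
print(hpc_metrics(3_216_124, 335, 9.9, 23_300))    # -> (969.7, 138.0), matching the table
```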

The local reality is far from these figures. Table 2 shows data from Argentina, Brazil, Chile and Mexico. In Chile, the availability of computing power per researcher is 2-3 times lower than in the OECD countries with the least computing power, and up to 100 times lower than for a researcher in the US. Our investment measured in TFlops per billion US$ of GDP is 166 times lower than in the US; compared with the European countries that invest the least in HPC it is 9 times lower, and compared with the European average (including Finland) it is 80 times lower, i.e. the difference is considerable. It is clear that we need to close this gap. An investment of about 5 million dollars in HPC infrastructure over the next 5 years would multiply our computational capacity by almost 20 and close much of this gap. However, returning to the example of Spain, the supercomputer it will commission this year will offer 23 times more computing power than at present, so we would only be maintaining our relative distance. If we do not invest, the gap will increase at least 23-fold and will end up being huge. Therefore, we do not only need a one-time investment; we need to ensure regular investment. Some neighbouring countries are already investing significantly in supercomputing. This is the case in Argentina, which is investing 7 million dollars (2 million for the datacenter and 5 million to buy a new supercomputer), increasing its current capacity by almost 40 times (5).

| Country | Rpeak (TFlops) | Population (millions) | Res/1000 | GFlops/Res | TFlops/B US$ |
|---|---|---|---|---|---|
| Brazil* | 3,000 | 216 | 1.1 | 12.6 | 1.8 |
| Mexico | 2,200 | 130 | 1.2 | 14.1 | 1.8 |
| Argentina | 400 | 45 | 1.2 | 7.4 | 0.8 |
| Chile | 250 | 20 | 1.3 | 9.6 | 0.8 |

Table 2. HPC availability per researcher and relative to GDP in the region (*only HPC capacity in academia is considered in this table).

For the above reasons, we are working to convince the Chilean authorities that we must have greater funding and, more crucially, permanent state funding for HPC. In this context, on July 6 a collaboration agreement was signed between 44 institutions, with the support of the Ministry of Science, to work on the creation of the National Supercomputing Laboratory (6). The agreement recognises that supercomputers are a critical infrastructure for Chile's development, that it is necessary to centralise requirements and resources at the national level, to obtain permanent funding from the State, and to create a new institutional framework to provide governance. In an unprecedented inter-institutional collaboration in Chile, competition for HPC resources at the national level is eliminated and the possibility of direct funding from the State is opened up without generating controversy.

Undoubtedly, supercomputing is a fundamental pillar for the development of any country, where increasing investment provides a strategic advantage, and in Latin America we should not be left behind.

By NLHPC

 

References

(1) Hyperion Research HPC Investments Bring High Returns

(2) EESI-2 Special Study To Measure And Model How Investments In HPC Can Create Financial ROI And Scientific Innovation In Europe 

(3) https://top500.org/ 

(4) https://www.lavanguardia.com/ciencia/20230129/8713515/llega-superordenador-marenostrum-5-bsc-barcelona.html

(5) https://www.hpcwire.com/2022/12/15/argentina-announces-new-supercomputer-for-national-science/

(6) https://uchile.cl/noticias/187955/44-instituciones-crearan-el-laboratorio-nacional-de-supercomputacion

 

Advanced Computing Collaboration to Growth Sustainable Ecosystems
https://www.risc2-project.eu/2022/12/12/advanced-computing-collaboration-to-growth-sustainable-ecosystems/ (12 December 2022)

The impact of High-Performance Computing (HPC) in contexts that require large-scale simulation and computation is well known. Over the course of the RISC2 project, and in view of its main goals, HPC has proven to be not merely a potential support for scientific challenges identified along the way, but an essential requirement for scientific, productive, and social activities. Different outcomes have been presented in academic venues such as the workshops and main tracks of the Latin American Conference on High-Performance Computing (CARLA 2023). In these venues, different RISC2 contributions show how HPC enables competitiveness, demands collaboration to address shared global interests, and guarantees sustainability.

In the European and Latin American (EuroLatAm) HPC ecosystems, it is possible to identify actors in different domains: industry, academia, research, society, and government. Each of them, at different levels, has a set of demands and interactions that depend on its interests. For example, industry demands HPC solutions that improve productivity and needs skills from academia to train developers who can build the applications that use those solutions. Another example is the relationship between research and government. In the HPC ecosystem, collaboration creates synergies around common interests, but it also demands policies and coordinated roadmaps to support long-term projects and activities with a clear impact on society.

Of course, a historical relationship exists between Latin America and Europe, dating back to colonial times. In advanced computing, this relationship can be traced from the first EuroLatAm grid computing projects more than twenty years ago to true supercomputing projects such as RISC and RISC2, now driven by shared interests. The various EuroLatAm HPC projects improve competitiveness and collaboration: competitiveness for industrial and productive business, partnership (and competitiveness) around science and education goals, and human well-being. Paraphrasing Mateo Valero, "who does not compute does not compete", I would add "who does not collaborate does not survive".

Taking collaboration and competitiveness together, the RISC2 project makes it possible to identify sustainability elements and sustainable workflows for different projects. The intense interaction between the actors of the EuroLatAm HPC ecosystem has produced not only scientific results but also policies, recommendations, best practices, and new questions. For these outcomes, RISC2 received the 2022 HPCwire Editors' Choice Award for Best HPC Collaboration at the 2022 Supercomputing Conference.

The growth of sustainable advanced computing ecosystems becomes evident when the results of projects such as RISC2 are known. Collaboration, interaction, and competitiveness build human development and guarantee technological diversification and peer-to-peer relationships to address common interests and problems. RISC2 is thus a crucial step towards a RISC3, just as the previous RISC was a step towards RISC2.

 

By Universidad Industrial de Santander

RISC2: enabling cross-continental HPC collaboration for society’s evolution
https://www.risc2-project.eu/2022/11/17/risc2-enabling-cross-continental-hpc-collaboration-for-societys-evolution/ (17 November 2022)

Leveraging HPC technologies to unravel epidemic dynamics
https://www.risc2-project.eu/2022/10/17/leveraging-hpc-technologies-to-unravel-epidemic-dynamics/ (17 October 2022)

When we talk about the 14th century, we are probably referring to one of the most adverse periods of human history. It was an era of regular armed conflicts, declining social systems, famine, and disease. It was the time of the bubonic plague pandemic, the Black Death, which wiped out millions of people in Europe, Africa, and Asia [1].

Several factors contributed to the catastrophic outcomes of the Black Death. The crisis was aggravated by the lack of two important components: knowledge and technology. There was no understanding of the spread dynamics of the disease, and containment policies were desperately based on assumptions or beliefs. Some opted for self-isolation to get away from the "bad air" that was believed to be the cause of the illness [2]. Others thought the plague was a divine punishment and persecuted heretics in order to appease the heavens [3]. Though the first of these two strategies was actually very effective, the second only deepened the tragedy of that scenario.

The bubonic plague of the 14th century is a great example of how costly ignorance can be in the context of epidemics. If the transmission mechanisms are not well understood, we are not able to design effective measures against them, and we may end up, like our medieval predecessors, making things much worse. Fortunately, advances in science and technology have provided humanity with powerful tools to comprehend infectious diseases and rapidly develop response plans. In this particular matter, epidemic models and simulations have become crucial.

During the recent COVID-19 events, many public health authorities relied on the outcomes of models to determine the most probable paths of the epidemic and make informed decisions regarding sanitary measures [4]. Epidemic models have been around for a long time and have become more and more sophisticated. One reason is that they feed on data that has to be collected and processed, and which has grown in quantity and variety.

Data contains interesting patterns that give hints about the influence of apparently non-epidemiological factors such as mobility and interaction type [5]. This is how, in the 19th century, John Snow managed to discover the cause of a cholera epidemic in Soho: he plotted the registered cholera cases on a map and saw that they clustered around a water pump that he presumed was contaminated [6]. Thanks to Dr. Snow's findings, water quality started to be considered an important component of public health.

As models grow in intricacy, the demand for more powerful computing systems also increases. In advanced approaches such as agent-based [7] and network (graph) models [8], every person is represented inside a complex framework in which the infection spreads according to specific rules. These rules may relate to the nature of the relations between individuals, their number of contacts, the places they visit, the characteristics of the disease, and even stochastic influences. The frameworks commonly comprise millions of individuals, because we often want to analyze countrywide effects.
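
To make the idea of rule-based spread on a contact network concrete, here is a minimal, purely illustrative sketch. It is not the model of any cited work: it assumes the networkx library, builds a synthetic contact graph, and uses invented infection and recovery probabilities:

```python
import random
import networkx as nx

def simulate_sir(graph, p_infect=0.05, p_recover=0.1, steps=100, seed=0):
    rng = random.Random(seed)
    status = {node: "S" for node in graph}               # S = susceptible, I = infected, R = recovered
    status[rng.choice(list(graph))] = "I"                # one initial case
    history = []
    for _ in range(steps):
        new_status = dict(status)
        for node, state in status.items():
            if state == "I":
                for neighbor in graph.neighbors(node):   # try to infect each susceptible contact
                    if status[neighbor] == "S" and rng.random() < p_infect:
                        new_status[neighbor] = "I"
                if rng.random() < p_recover:             # infected individuals eventually recover
                    new_status[node] = "R"
        status = new_status
        history.append(sum(1 for s in status.values() if s == "I"))
    return history

contacts = nx.barabasi_albert_graph(10_000, 3, seed=1)   # toy synthetic contact network
print(max(simulate_sir(contacts)))                       # peak number of simultaneous infections
```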

In brief, to unravel epidemic dynamics we need to process and produce a lot of accurate information, and we need to do it fast. High-performance computing (HPC) systems provide high-spec hardware and support advanced techniques such as parallel computing, which accelerates calculations by using several resources at a time to perform one or more tasks concurrently. This is an advantage for stochastic epidemic models, which require hundreds of independent executions to deliver reliable outputs. Frameworks with millions of nodes or agents need many gigabytes of memory to be processed, a requirement that can be met only by HPC systems.
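
The requirement of hundreds of independent executions maps naturally onto parallel hardware. The sketch below illustrates only that pattern, under stated assumptions: the "model" is a toy branching process with made-up parameters, and the parallelism uses Python's multiprocessing pool, whereas on a real HPC cluster the same pattern is typically expressed with MPI or a batch scheduler:

```python
import numpy as np
from multiprocessing import Pool

def one_replica(seed, r0=1.3, generations=20):
    rng = np.random.default_rng(seed)
    infected, total = 1, 1
    for _ in range(generations):                         # each case infects a Poisson(r0) number of others
        infected = int(rng.poisson(r0, size=infected).sum()) if infected else 0
        total += infected
    return total

if __name__ == "__main__":
    with Pool(processes=8) as pool:                      # eight replicas running at a time
        totals = pool.map(one_replica, range(200))       # 200 independent stochastic replicas
    print(np.mean(totals))                               # averaging many runs gives the reliable estimate
```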

Based on the work of Cruz et al. [9], we developed a model that represents the spread dynamics of COVID-19 in Costa Rica [10]. The model consists of a contact network of five million nodes, in which every Costa Rican citizen has family, school, work, or random connections with their neighbours. These relations affect the probability of getting infected, as does the infection status of the neighbours. The infection status varies with time, as people evolve from having no symptoms to having mild, severe, or critical conditions; people may also remain asymptomatic. The model additionally accounts for variations in location, school and workplace sizes, age, mobility, and vaccination rates, and some of these inputs are stochastic.
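
As a purely illustrative sketch of how contact type can modulate transmission in such a network (this is not the actual model of [10]; the structure and probabilities below are invented, and the networkx library is assumed), each edge can carry a label whose type-specific probability is looked up when an infection is attempted:

```python
import random
import networkx as nx

TRANSMISSION = {"family": 0.10, "school": 0.05, "work": 0.04, "random": 0.01}   # made-up values

def build_contacts(n_people=1_000, seed=0):
    rng = random.Random(seed)
    g = nx.Graph()
    g.add_nodes_from(range(n_people))
    for person in range(n_people):
        for _ in range(4):                               # a handful of labelled contacts per person
            other = rng.randrange(n_people)
            if other != person:
                g.add_edge(person, other, kind=rng.choice(list(TRANSMISSION)))
    return g

def infection_attempts(graph, infected_node, rng):
    """Return the neighbours infected in one step, weighted by contact type."""
    return [nbr for nbr in graph.neighbors(infected_node)
            if rng.random() < TRANSMISSION[graph.edges[infected_node, nbr]["kind"]]]

g = build_contacts()
print(infection_attempts(g, 0, random.Random(1)))
```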

Such a model takes only a few hours to simulate on an HPC cluster, whereas normal systems would require much more time. We were able to evaluate scenarios in which different sanitary measures were changed or eliminated. This analysis brought interesting results, for example that a gathering with family or friends can be as harmful as attending a concert with dozens of strangers, in terms of the additional infections these activities generate. Such findings are valuable inputs for health authorities, because they demonstrate that preventing certain behaviours in the population can delay the peak of infections and give them more time to save lives.

Even though HPC has been fundamental in computational epidemiology for giving key insights into epidemic dynamics, we still have to leverage this technology in some contexts. For example, we must first strengthen health and information systems in developing countries to take full advantage of HPC and epidemic models. This can be achieved through inter-institutional and international collaboration, but also through national policies that support research and development. If we encourage the study of infectious diseases, we will benefit from this knowledge and be better prepared to face future pandemics.

 

References

[1] Encyclopedia Britannica. n.d. Crisis, recovery, and resilience: Did the Middle Ages end?. [online] Available at: <https://www.britannica.com/topic/history-of-Europe/Crisis-recovery-and-resilience-Did-the-Middle-Ages-end> [Accessed 13 September 2022]. 

[2] Mellinger, J., 2006. Fourteenth-Century England, Medical Ethics, and the Plague. AMA Journal of Ethics, 8(4), pp.256-260. 

[3] Carr, H., 2020. Black Death Quarantine: How Did We Try To Contain The Deadly Disease?. [online] Historyextra.com. Available at: <https://www.historyextra.com/period/medieval/plague-black-death-quarantine-history-how-stop-spread/> [Accessed 13 September 2022]. 

[4] McBryde, E., Meehan, M., Adegboye, O., Adekunle, A., Caldwell, J., Pak, A., Rojas, D., Williams, B. and Trauer, J., 2020. Role of modelling in COVID-19 policy development. Paediatric Respiratory Reviews, 35, pp.57-60. 

[5] Pasha, D., Lundeen, A., Yeasmin, D. and Pasha, M., 2021. An analysis to identify the important variables for the spread of COVID-19 using numerical techniques and data science. Case Studies in Chemical and Environmental Engineering, 3, p.100067. 

[6] Bbc.co.uk. 2014. Historic Figures: John Snow (1813 – 1858). [online] Available at: <https://www.bbc.co.uk/history/historic_figures/snow_john.shtml> [Accessed 13 September 2022]. 

[7] Publichealth.columbia.edu. 2022. Agent-Based Modeling. [online] Available at: <https://www.publichealth.columbia.edu/research/population-health-methods/agent-based-modeling> [Accessed 13 September 2022]. 

[8] Keeling, M. and Eames, K., 2005. Networks and epidemic models. Journal of The Royal Society Interface, 2(4), pp.295-307. 

[9] Cruz, E., Maciel, J., Clozato, C., Serpa, M., Navaux, P., Meneses, E., Abdalah, M. and Diener, M., 2021. Simulation-based evaluation of school reopening strategies during COVID-19: A case study of São Paulo, Brazil. Epidemiology and Infection, 149. 

[10] Abdalah, M., Soto, C., Arce, M., Cruz, E., Maciel, J., Clozato, C. and Meneses, E., 2022. Understanding COVID-19 Epidemic in Costa Rica Through Network-Based Modeling. Communications in Computer and Information Science, pp.61-75. 

 

By CeNAT

RISC2 partner is a member of AISIS 2021’s Scientific Committee
https://www.risc2-project.eu/2021/11/23/risc2-partner-is-a-member-of-aisis-2021s-scientific-committee/ (23 November 2021)

Rafael Mayo Garcia, from CIEMAT, one of the RISC2 partners, took part in AISIS 2021 as a member of its Scientific Committee. The conference was held from the 11th to the 15th of October 2021.

Rafael Mayo Garcia joined the Scientific Committee of the Artificial Intelligence for Science, Industry and Society (AISIS) 2021 conference.

AISIS is a conference that brings together scientists, industry representatives and policy makers to discuss the implementation of AI in a variety of areas and disciplines. This year's edition had a strong focus on how AI has facilitated the global response to the COVID-19 pandemic. The event was hosted online by the National Autonomous University of Mexico (UNAM).

Rafael Mayo Garcia explained that he worked "on the definition of the agenda and the review of contributions" together with committee members from around the world. The programme and agenda in which the RISC2 partner played an important role comprised several keynote speakers, topics and convenors.

Learn more about this event and Rafael Mayo Garcia’s role in it here.

Second Symposium on Artificial Intelligence for Science, Industry and Society
https://www.risc2-project.eu/events/second-symposium-on-artificial-intelligence-for-science-industry-and-society/ (16 November 2021)

The second edition of AISIS will be hosted online by the National Autonomous University of Mexico (UNAM), the largest university in Latin America, from the 11th to the 15th of October 2021. Only registered participants will have access to the event and to some of the material presented.
