germany - RISC2 Project https://www.risc2-project.eu

Towards a greater HPC capacity in Latin America https://www.risc2-project.eu/2023/02/24/towards-a-greater-hpc-capacity-in-latin-america/ Fri, 24 Feb 2023

High-Performance Computing (HPC) has proven to be a strong driver for science and technology development, and is increasingly considered indispensable for most scientific disciplines. HPC is making a difference in key topics of great interest such as climate change, personalised medicine, engineering, astronomy, education, economics, industry and public policy, and has become a pillar for the development of any country. The major powers attach strategic importance to it and invest billions of dollars, in a competition without limits where data is the new gold.

A country that does not have the computational capacity to solve its own problems will have no alternative but to try to acquire solutions provided by others. One of the most important aspects of sovereignty in the 21st century is the ability to produce mathematical models and the capacity to solve them. Today, the availability of computing power commensurate with one's wealth exponentially increases a country's capacity to produce knowledge. In the developed world, it is estimated that for every dollar invested in supercomputing, the return is of the order of US$ 44 to society(1) and US$ 30 to the academic world(2). For these reasons, HPC occupies an important place on the political and diplomatic agendas of developed countries.

In Latin America, investment in HPC is very low compared to what the US, Asia and Europe are doing. To quantify this difference, we present the tables below, which show the accumulated computing capacity in the ranking of the 500 most powerful supercomputers in the world – the TOP500(3) – (Table 1), and the local reality (Table 2). Other data are also included, such as the population (in millions), the number of researchers per 1,000 inhabitants (Res/1000), the computing capacity per researcher (GFlops/Res) and the computing capacity per million US$ of GDP. In Table 1, we have grouped the countries by geographical area. America appears as the area with the highest computing capacity, essentially due to the USA, which holds almost 45% of the world's computing capacity in the TOP500. It is followed by Asia and then Europe. The TOP500 list includes mainly academic research centres, but also industrial ones, typically those used in applied research (many private systems are not published, for obvious reasons). For example, in Brazil – which shows good computing capacity with 88,175 TFlops – the vast majority is in the hands of the oil industry and only about 3,000 TFlops are used for basic research. Countries listed in the TOP500 invest in HPC from a few TFlops per million US$ of GDP (Belgium 5, Spain 7, Bulgaria 8), through countries investing in the order of hundreds (Italy 176, Japan 151, USA 138), to even thousands, as is the case of Finland with 1,478. For those countries where we were able to find data on the number of researchers, the figures range from a few GFlops per researcher (Belgium 19, Spain 24, Hungary 52) to close to 1,000 GFlops, i.e. 1 TFlop (USA 970, Italy 966), with Finland surpassing this barrier at 4,647. Note that, unlike what happens locally, countries with a certain degree of development invest in supercomputing every 3-4 years, so the data we are showing will soon be updated and there will be variations in the list. For example, this year a new supercomputer will come into operation in Spain(4), which, with an investment of some 150 million euros, will give Spain one of the most powerful supercomputers in Europe – and the world.

Country | Rpeak (TFlops) | Population (millions) | Res/1000 | GFlops/Res | TFlops/M US$
United States | 3,216,124 | 335 | 9.9 | 969.7 | 138.0
Canada | 71,911 | 39 | 8.8 | 209.5 | 40.0
Brazil | 88,175 | 216 | 1.1 | 371.1 | 51.9
AMERICA | 3,376,211 | 590 | | |
China | 1,132,071 | 1400 | | | 67.4
Japan | 815,667 | 124 | 10.0 | 657.8 | 151.0
South Korea | 128,264 | 52 | 16.6 | 148.6 | 71.3
Saudi Arabia | 98,982 | 35 | | | 141.4
Taiwan | 19,562 | 23 | | | 21.7
Singapore | 15,785 | 6 | | | 52.6
Thailand | 13,773 | 70 | | | 27.5
United Arab Emirates | 12,164 | 10 | | | 15.2
India | 12,082 | 1380 | | | 4.0
ASIA | 2,248,353 | 3100 | | |
Finland | 443,391 | 6 | 15.9 | 4,647.7 | 1,478.0
Italy | 370,262 | 59 | 6.5 | 965.5 | 176.3
Germany | 331,231 | 85 | 10.1 | 385.8 | 78.9
France | 251,166 | 65 | 11.4 | 339.0 | 83.7
Russia | 101,737 | 145 | | | 59.8
United Kingdom | 92,563 | 68 | 9.6 | 141.8 | 29.9
Netherlands | 56,740 | 18 | 10.6 | 297.4 | 56.7
Switzerland | 38,600 | 9 | 9.4 | 456.3 | 48.3
Sweden | 32,727 | 10 | 15.8 | 207.1 | 54.5
Ireland | 26,320 | 5 | 10.6 | 496.6 | 65.8
Luxembourg | 18,291 | 0.6 | | | 365.8
Poland | 17,099 | 38 | 7.6 | 59.2 | 28.5
Norway | 17,031 | 6 | 13.0 | 218.3 | 34.1
Czech Republic | 12,914 | 10 | 8.3 | 155.6 | 43.0
Spain | 10,296 | 47 | 7.4 | 29.6 | 7.4
Slovenia | 10,047 | 2 | 9.9 | 507.4 | 167.5
Austria | 6,809 | 9 | 11.6 | 65.2 | 13.6
Bulgaria | 5,942 | 6 | | | 8.5
Hungary | 4,669 | 10 | 9.0 | 51.9 | 23.3
Belgium | 3,094 | 12 | 13.6 | 19.0 | 5.2
EUROPE | 1,850,934 | 610.6 | | |
OTHER | | | | |
Australia | 60,177 | 26 | | | 40.1
Morocco | 5,014 | 39 | | | 50.1

Table 1. HPC availability per researcher and relative to GDP in the TOP500 countries (includes HPC in industry).

The local reality is far from these figures. Table 2 shows data from Argentina, Brazil, Chile and Mexico. In Chile, the availability of computing power per researcher is 2-3 times lower than in the OECD countries with the least computing power, and up to 100 times lower than for a researcher in the US. Our investment in Chile, measured in TFlops per million US$ of GDP, is 166 times lower than in the US; with respect to the European countries that invest least in HPC it is 9 times lower, and with respect to the European average (including Finland) it is 80 times lower, i.e. the difference is considerable. It is clear that we need to close this gap. An investment of about 5 million dollars in HPC infrastructure over the next 5 years would multiply our computational capacity by a factor of almost 20. However, returning to the example of Spain, the supercomputer it will have this year will offer 23 times more computing power than at present and, therefore, we would only maintain our relative distance. If we do not invest, the gap will increase by at least 23 times and will end up being huge. Therefore, we need not only a one-time investment, but a guarantee of regular investment. Some neighbouring countries are already investing significantly in supercomputing. This is the case in Argentina, where they are investing 7 million dollars (2 million for the datacenter and 5 million to buy a new supercomputer), which will increase their current capacity by almost 40 times(5).

Country | Rpeak (TFlops) | Population (millions) | Res/1000 | GFlops/Res | TFlops/M US$
Brazil* | 3,000 | 216 | 1.1 | 12.6 | 1.8
Mexico | 2,200 | 130 | 1.2 | 14.1 | 1.8
Argentina | 400 | 45 | 1.2 | 7.4 | 0.8
Chile | 250 | 20 | 1.3 | 9.6 | 0.8

Table 2. HPC availability per researcher and relative to GDP in the region (*only HPC capacity in academia is considered in this table).
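
To make the derived columns concrete, the short Python sketch below recomputes GFlops per researcher from the raw table values and the gaps quoted in the text. The inputs are taken from Tables 1 and 2; small deviations from the quoted ratios are rounding effects (the text's 166x per-GDP gap was computed from unrounded figures, while the rounded table values give about 172x).

```python
# Recompute the per-researcher metric from the raw table columns.
# rpeak: TFlops, pop: millions of inhabitants, res_1000: researchers
# per 1,000 inhabitants, tflops_gdp: TFlops per million US$ of GDP.
countries = {
    "United States": dict(rpeak=3_216_124, pop=335, res_1000=9.9, tflops_gdp=138.0),
    "Chile":         dict(rpeak=250,       pop=20,  res_1000=1.3, tflops_gdp=0.8),
}

for name, c in countries.items():
    researchers = c["pop"] * 1e6 * c["res_1000"] / 1000   # absolute headcount
    gflops_per_res = c["rpeak"] * 1000 / researchers      # 1 TFlop = 1,000 GFlops
    print(f"{name}: {gflops_per_res:.1f} GFlops per researcher")
# -> United States: 969.7, Chile: 9.6 (matching Tables 1 and 2)

us, cl = countries["United States"], countries["Chile"]
print(f"US/Chile gap per researcher: {969.7 / 9.6:.0f}x")                   # ~101x
print(f"US/Chile gap per GDP: {us['tflops_gdp'] / cl['tflops_gdp']:.0f}x")  # ~172x
```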

For the above reasons, we are working to convince the Chilean authorities that we must have greater funding and, more crucially, permanent state funding for HPC. In relation to this, on July 6 we signed a collaboration agreement between 44 institutions, with the support of the Ministry of Science, to work on the creation of the National Supercomputing Laboratory(6). The agreement recognises that supercomputers are a critical infrastructure for Chile's development, that it is necessary to centralise requirements and resources at the national level, to obtain permanent funding from the State, and to create a new institutional framework to provide governance. In an unprecedented inter-institutional collaboration in Chile, competition for HPC resources at the national level is eliminated and the possibility of direct funding from the State is opened up without generating controversy.

Undoubtedly, supercomputing is a fundamental pillar for the development of any country, where increasing investment provides a strategic advantage, and in Latin America we should not be left behind.

By NLHPC

 

References

(1) Hyperion Research HPC Investments Bring High Returns

(2) EESI-2 Special Study To Measure And Model How Investments In HPC Can Create Financial ROI And Scientific Innovation In Europe 

(3) https://top500.org/ 

(4) https://www.lavanguardia.com/ciencia/20230129/8713515/llega-superordenador-marenostrum-5-bsc-barcelona.html

(5) https://www.hpcwire.com/2022/12/15/argentina-announces-new-supercomputer-for-national-science/

(6) https://uchile.cl/noticias/187955/44-instituciones-crearan-el-laboratorio-nacional-de-supercomputacion

 

JUPITER Ascending – First European Exascale Supercomputer Coming to Jülich https://www.risc2-project.eu/2023/01/02/jupiter-ascending-first-european-exascale-supercomputer-coming-to-julich/ Mon, 02 Jan 2023

It was finally decided in 2022: Forschungszentrum Jülich will be home to Europe's first exascale computer. The supercomputer is set to be the first in Europe to surpass the threshold of one quintillion ("1" followed by 18 zeros) calculations per second. The system will be acquired by the European supercomputing initiative EuroHPC JU. The exascale computer should help to solve important and urgent scientific questions regarding, for example, climate change, how to combat pandemics, and sustainable energy production, while also enabling the intensive use of artificial intelligence and the analysis of large data volumes. The overall costs for the system amount to 500 million euros. Of this total, 250 million euros is being provided by EuroHPC JU and a further 250 million euros in equal parts by the German Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW).

The computer named JUPITER (short for “Joint Undertaking Pioneer for Innovative and Transformative Exascale Research”) will be installed in 2023/2024 on the campus of Forschungszentrum Jülich. It is intended that the system will be operated by the Jülich Supercomputing Centre (JSC), whose supercomputers JUWELS and JURECA currently rank among the most powerful in the world. JSC participated in the application procedure for a high-end supercomputer as a member of the Gauss Centre for Supercomputing (GCS), an association of the three German national supercomputing centres: JSC in Jülich, the High-Performance Computing Center Stuttgart (HLRS), and the Leibniz Supercomputing Centre (LRZ) in Garching. The competition was organized by the European supercomputing initiative EuroHPC JU, which was formed by the European Union together with European countries and private companies.

JUPITER is now set to become the first European supercomputer to make the leap into the exascale class. In terms of computing power, it will be more powerful than 5 million modern laptops or PCs. Just like Jülich's current supercomputer JUWELS, JUPITER will be based on a dynamic, modular supercomputing architecture, which Forschungszentrum Jülich developed together with European and international partners in the EU's DEEP research projects.

In a modular supercomputer, various computing modules are coupled together. This enables program parts of complex simulations to be distributed over several modules, ensuring that the various hardware properties can be optimally utilized in each case. Its modular construction also means that the system is well prepared for integrating future technologies such as quantum computing or neuromorphic modules, which emulate the neural structure of a biological brain.

Figure: Modular Supercomputing Architecture. Computing and storage modules of the exascale computer in its base configuration (blue), as well as optional modules (green) and modules for future technologies (purple) as possible extensions.
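
A coupled job of this kind can be expressed, for example, with Slurm's heterogeneous-job support, where each component of one job requests a different module. The sketch below is purely illustrative: the partition names ("cluster", "booster") and binaries are hypothetical placeholders, not the actual configuration of a JSC system.

```bash
#!/bin/bash
# Illustrative sketch of a modular-system job using Slurm heterogeneous jobs.
# Partition names and executables are hypothetical.
#SBATCH --job-name=modular-sim
#SBATCH --partition=cluster --nodes=2     # CPU module, e.g. pre/post-processing
#SBATCH hetjob
#SBATCH --partition=booster --nodes=8     # GPU module, the massively parallel part

# One coupled MPI application spanning both modules; ':' separates
# the components of the heterogeneous job.
srun ./simulation_cpu_part : ./simulation_gpu_part
```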

In its base configuration, JUPITER will have an enormously powerful booster module with highly efficient GPU-based computation accelerators. Massively parallel applications are accelerated by this booster in a similar way to a turbocharger, for example to calculate high-resolution climate models, develop new materials, simulate complex cell processes and energy systems, advance basic research, or train next-generation, computationally intensive machine-learning algorithms.

One major challenge is the energy required for such large computing power. The average power is anticipated to be up to 15 megawatts. JUPITER has been designed as a “green” supercomputer and will be powered by green electricity. The envisaged warm-water cooling system should help to ensure that JUPITER achieves the highest efficiency values. At the same time, the cooling technology opens up the possibility of intelligently using the waste heat that is produced. For example, just like its predecessor system JUWELS, JUPITER will be connected to the new low-temperature network on the Forschungszentrum Jülich campus. Further potential applications for the waste heat from JUPITER are currently being investigated by Forschungszentrum Jülich.

By Jülich Supercomputing Centre (JSC)

 

Image: JUWELS, Germany's fastest supercomputer, at Forschungszentrum Jülich, funded in equal parts by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW) via the Gauss Centre for Supercomputing (GCS). (Copyright: Forschungszentrum Jülich / Sascha Kreklau)

RISC2 organized a workshop co-located with IEEE Cluster 2022 https://www.risc2-project.eu/2022/09/21/risc2-organized-a-workshop-co-located-with-ieee-cluster-2022/ Wed, 21 Sep 2022

RISC2, in collaboration with EU-LAC ResInfra, organized the workshop “HPC for International Collaboration between Europe and Latin America”, in conjunction with IEEE Cluster 2022 Conference in Heidelberg, Germany. About 15 people participated in the workshop, which took place on September 6, 2022.

The workshop aimed to exchange experiences, results, and best practices from collaboration initiatives between Europe and Latin America in which HPC was essential, and to discuss how to work towards sustainability by reinforcing the bridges between the HPC communities in both regions. The workshop was organized by our partners Esteban Meneses from CeNAT, Fabrizio Gagliardi from BSC, Bernd Mohr from JSC, Carlos J. Barrios H. from UIS, and Rafael Mayo-García from CIEMAT.

The workshop was opened with a keynote by Daniele Lezzi from BSC, who reviewed the EU-LATAM collaboration on HPC. Six more presentations highlighted research work from Latin America and collaborative work between organizations on both continents. More information about the workshop, including a detailed program, can be found here.

By organizing a networking event at the end of the workshop day, the RISC2 project supported the IEEE Cluster Conference, a major international forum for presenting and sharing recent accomplishments and technological developments in cluster computing, as well as the use of cluster systems for scientific and commercial applications.

Our partner Esteban Meneses, from the National High Technology Center (CeNAT) in Costa Rica, was one of the Publicity Co-Chairs of the IEEE Cluster 2022 Conference.

Webinar: Application Benchmarking with JUBE: Lessons Learned https://www.risc2-project.eu/events/webinar-3-application-benchmarking-with-jube-lessons-learned/ Tue, 26 Jul 2022

Date: October 19, 2022 | 4 p.m. (UTC+1)

Speaker: Marc-André Hermanns, RWTH Aachen

Moderator: Bernd Mohr, Jülich Supercomputing Centre

JUBE can help automate application benchmarking on a given platform. JUBE's automatic sandboxing and parameter-space creation make it easy to sweep build and runtime parameters for an application on a given platform in order to identify the best build and run configuration.

This talk provides some lessons learned from building a JUBE-based benchmark suite for the RWTH Aachen University job mix that reduces redundancy of information and allows for easy integration of future applications. It will specifically address advanced features for parameter settings and parameter inheritance, and some tips and tricks to overcome some of JUBE's limitations.
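
For a flavour of such a parameter sweep, here is a minimal sketch in JUBE's XML format, modelled on the introductory examples in JUBE's documentation; the benchmark name, parameter names and the echo command are placeholders, not taken from the RWTH Aachen suite discussed in the talk.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jube>
  <benchmark name="sweep_example" outpath="bench_run">
    <!-- Every combination of these values becomes one benchmark run -->
    <parameterset name="run_pset">
      <parameter name="tasks" type="int">1,2,4</parameter>
      <parameter name="opt">-O2,-O3</parameter>
    </parameterset>
    <step name="execute">
      <use>run_pset</use>
      <!-- JUBE sandboxes each run in its own work directory -->
      <do>echo "compiled with $opt, run with $tasks tasks"</do>
    </step>
  </benchmark>
</jube>
```

Running `jube run sweep_example.xml` would then create one sandboxed work directory per parameter combination (six in this sketch) and execute the command in each.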

About the speaker: Marc-André Hermanns is a member of the HPC group at the IT Center of RWTH Aachen University. His research focuses on tools and interfaces for the performance analysis of parallel applications. He has been involved in the design and implementation of various courses on parallel programming for high-performance computing. Next to supporting HPC users as part of the competence network for high-performance computing in North Rhine-Westphalia (HPC.NRW), he also contributes to the development of online tutorials and courses within the competence network. He is a long-time user of and advocate for JUBE and has created configurations for various applications and benchmarks, both for classical system benchmarking and for the integration of performance analysis tools into such workflows.

About the moderator: Bernd Mohr began designing and developing tools for the performance analysis of parallel programs with his diploma thesis (1987) at the University of Erlangen in Germany, and continued this work during his Ph.D. (1987 to 1992). During a three-year postdoc position at the University of Oregon, he designed and implemented the original TAU performance analysis framework. Since 1996 he has been a senior scientist at Forschungszentrum Jülich, and since 2000 the team leader of the group ”Programming Environments and Performance Analysis”. Besides being responsible for user support and training with regard to performance tools at the Jülich Supercomputing Centre (JSC), he leads the Scalasca performance tools efforts in collaboration with Prof. Felix Wolf of TU Darmstadt. Since 2007, he has also served as deputy head of the JSC division ”Application Support”. He was an active member of the International Exascale Software Project (IESP/BDEC) and a work package leader in the European (EESI2) and Jülich (EIC, ECL) Exascale efforts. He served on the Steering Committee of the SC and ISC conference series. He is the author of several dozen conference and journal articles about performance analysis and tuning of parallel programs.

 

Registrations are now closed.

Webinar: Getting Scientific Software Installed: From EasyBuild to EESSI https://www.risc2-project.eu/events/1st-webinar-series-hpc-system-tools/ Tue, 26 Jul 2022

Date: August 24, 2022 | 4 p.m. (UTC+1)

Speaker: Kenneth Hoste, Ghent University

Summary: Over a decade ago, EasyBuild was created at Ghent University to help deal with the burden of getting scientific software installed on HPC infrastructure in an efficient way, with attention to the performance of the resulting software installations.

Shortly after EasyBuild was made available to the world as open source software, a helpful community started growing around it, a community that drives and actively participates in the development of EasyBuild. EasyBuild has grown significantly over time, in terms of features and supported software, as well as in the community itself.

Due to recent trends in the HPC community (increasing diversity in hardware, rise of the cloud, explosive growth of scientific software from various domains), the need for taking the next step became clear. As a result, the European Environment for Scientific Software Applications (EESSI) project was started in 2020. The main goal of EESSI is to provide a shared central stack of (optimized) scientific software installations which can easily be leveraged on a variety of platforms, including personal workstations, cloud environments, and HPC infrastructure.
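
As a rough illustration of the two approaches: with EasyBuild one builds software locally from an easyconfig file, while with EESSI one initialises the pre-built shared stack distributed via CernVM-FS and loads modules directly. The easyconfig name and module below are illustrative examples, and the EESSI path follows the project's pilot setup, which may differ in later releases.

```bash
# EasyBuild: build and install an application plus any missing dependencies
# locally; '--robot' enables automatic dependency resolution.
# (The easyconfig name is an illustrative example.)
eb GROMACS-2021.5-foss-2021b.eb --robot

# EESSI: skip local builds entirely -- initialise the shared software stack
# distributed via CernVM-FS and load a pre-built, optimized module.
# (Path as in the EESSI pilot repository.)
source /cvmfs/pilot.eessi-hpc.org/latest/init/bash
module load GROMACS
```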

 

Kenneth Hoste is a computer scientist and FOSS enthusiast from Belgium. He holds a Master's degree (2005) and a PhD (2010) in Computer Science from Ghent University. His dissertation topic was “Analysis, Estimation and Optimization of Computer System Performance Using Machine Learning”. Since October 2010, he has been a member of the HPC team at Ghent University, where he is mainly responsible for user support & training. As part of his job, he is also the lead developer and release manager of EasyBuild, a software build and installation framework for (scientific) software on High Performance Computing (HPC) systems. In his free time, he is a family guy and a fan of loud music, frequently attending gigs and festivals. He enjoys helping people & sharing his expertise, and likes joking around. He has a weak spot for stickers.

Moderator: Bernd Mohr, Jülich Supercomputing Centre

Bernd Mohr began designing and developing tools for the performance analysis of parallel programs with his diploma thesis (1987) at the University of Erlangen in Germany, and continued this work during his Ph.D. (1987 to 1992). During a three-year postdoc position at the University of Oregon, he designed and implemented the original TAU performance analysis framework. Since 1996 he has been a senior scientist at Forschungszentrum Jülich, and since 2000 the team leader of the group ”Programming Environments and Performance Analysis”. Besides being responsible for user support and training with regard to performance tools at the Jülich Supercomputing Centre (JSC), he leads the Scalasca performance tools efforts in collaboration with Prof. Felix Wolf of TU Darmstadt. Since 2007, he has also served as deputy head of the JSC division ”Application Support”. He was an active member of the International Exascale Software Project (IESP/BDEC) and a work package leader in the European (EESI2) and Jülich (EIC, ECL) Exascale efforts. He served on the Steering Committee of the SC and ISC conference series. He is the author of several dozen conference and journal articles about performance analysis and tuning of parallel programs.

IEEE Cluster 2022 https://www.risc2-project.eu/events/ieee-cluster-2022/ Thu, 19 May 2022

Open call for EU&LAC collaboration https://www.risc2-project.eu/2022/02/04/open-call-for-eulac-collaboration/ Fri, 04 Feb 2022

Transnational consortia are invited to submit proposals related to the following six topics in the thematic fields of Global Challenges, Health, Biodiversity, and Energy. One of the specificities of this joint call is that it includes four topics based on sharing large Research Infrastructures:

GLOBAL CHALLENGES

Global Challenges I – Interactions and integration between the climate science, Social Sciences and Humanities (SSH) and other communities

Participating funding agencies from: Austria, Bolivia, Brazil (CONFAP), Dominican Republic, Germany, Panama, Poland, Spain (AEI), Turkey, Uruguay.

Global Challenges II – Cross-cutting digital research infrastructure

Participating funding agencies from: Austria, Bolivia, Brazil (CNPq, CONFAP), Dominican Republic, Germany, Panama, Spain (AEI), Turkey.

HEALTH

Health I – Personalised Medicine

Participating funding agencies from: Austria, Bolivia, Brazil (CNPq, CONFAP), Dominican Republic, Germany, Italy, Panama, Poland, Spain (AEI and ISCIII), Turkey.

Health II – EU-LAC Regional Hubs: Integrating research infrastructures for Health and Disease

Participating funding agencies from: Austria, Bolivia, Brazil (CONFAP), Dominican Republic, Germany, Italy, Panama, Peru, Portugal, Spain (AEI), Turkey, Uruguay.

BIODIVERSITY

Biodiversity and Ecosystem Services Research Infrastructures

Participating funding agencies from: Austria, Bolivia, Brazil (CNPq, CONFAP), Dominican Republic, Germany, Italy, Panama, Peru, Spain (AEI), Turkey.

ENERGY

Interoperability of energy data spaces for an optimized exploitation by producers and prosumers / Research Infrastructures

Participating funding agencies from: Austria, Bolivia, Brazil (CONFAP), Dominican Republic, Germany, Panama, Spain (AEI), Turkey

Applicants searching for potential European and/or Latin American & Caribbean partners are invited to register for free on the online ENRICH in LAC Matchmaking platform. This new platform enables direct virtual contacts and project initiation among RTI-focused researchers, start-ups, companies, soft-landing hubs, and other organizations between the LAC region and Europe at any time.

Call documents are available here.

PRACE 24th Call for Proposals for Project Access https://www.risc2-project.eu/events/teste/ Wed, 29 Sep 2021

The 24th Call for Proposals for Project Access offers the possibility to benefit from world-class supercomputers in Europe.

Applicants’ reply to scientific reviews: Mid-January 2022
Submission of Progress / Final Reports for continuation proposals: via the platform on the submission form until 02/11/2021 @ 10:00 Brussels Time
Communication of allocation decision: End of March 2022
Allocation period for awarded proposals: 01/04/2022 – 31/03/2023
Type of Access (*): Single-year Project Access and Multi-year Project Access

(*) All proposals consist of two parts: an online form and the ‘Project scope and plan’. Please note that if you wish to continue work on a project that has finished or is ongoing, a new proposal (i.e. a continuation proposal) needs to be submitted via the platform in addition to a final/progress report.

Industry Access: Call 24 offers Principal Investigators from industry the possibility to apply for Single-year access to a special Industry Track which prioritises 10% of the total resources available (see Section 3.1.2 – Eligibility criteria for commercial companies in Call 24 “Terms of Reference” document).

The computer systems (called Tier-0 systems) and their operations that are accessible through PRACE are provided for this 24th call by 5 PRACE hosting members: BSC representing Spain, CINECA representing Italy, ETH Zurich/CSCS representing Switzerland, GCS representing Germany and GENCI representing France.

Scientists and researchers can apply for access to PRACE resources. Industrial users can apply if they have their head offices or substantial R&D activity in Europe.

The Call is open to:
Project Access: Proposals can be based on a 12-month schedule (Single-year Projects) or on a 24- or 36-month schedule (Multi-year Projects). The allocation of awarded resources is made one year at a time, with provisional allocations awarded for the 2nd and 3rd year.

IMPORTANT NOTICE FOR MULTI-YEAR PROPOSALS:
Please note that the Partnership for Advanced Computing in Europe (PRACE) aisbl is in a transition phase and cannot guarantee that requested HPC systems in the 24th Call for Project Access will be available for multi-year access (allocations for the 2nd and/or 3rd year).

Additionally, the Call:

  • Reserves 0.5% of the total resources available for this call for Centres of Excellence (CoE) as selected by the European Commission under the E-INFRA-5-2015 call for proposals.
  • Includes an Industry Access track that prioritises 10% of the total resources available for this call for proposals for Single-year projects with a Principal Investigator from industry.

The PRACE Access Committee, composed of leading international scientists and engineers, ranks the proposals received and produces a recommendation to award PRACE resources based on scientific and technical excellence.

Call-related documents
The following documents form the reference for this call:

  • The Terms of Reference can be found here.
  • The Technical Guidelines for Applicants can be found here.
  • The Word template for the Project Scope and Plan can be found here.
  • The Latex template for the Project Scope and Plan can be found here.
System | Architecture | Site (Country) | Core hours (node hours) | Minimum request (core hours) | Notes
HAWK* | HPE Apollo | GCS@HLRS (DE) | 345.6 million (2.7 million) | 100 million |
Joliot-Curie KNL | BULL Sequana X1000 | GENCI@CEA (FR) | 37.5 million (0.6 million) | 15 million |
Joliot-Curie Rome | BULL Sequana XH2000 | GENCI@CEA (FR) | 195.3 million (1.5 million) | 15 million |
Joliot-Curie SKL | BULL Sequana X1000 | GENCI@CEA (FR) | 52.9 million (1.1 million) | 15 million |
JUWELS Booster* | BULL Sequana XH2000 | GCS@JSC (DE) | 85.2 million (1.78 million) | 7 million | Use of GPUs
JUWELS Cluster* | BULL Sequana X1000 | GCS@JSC (DE) | 35.04 million (0.73 million) | 35 million |
Marconi100 | IBM Power 9 AC922 Witherspoon | CINECA (IT) | 165 million (1.87 million) | 35 million | Use of GPUs
MareNostrum 4* | Lenovo System | BSC (ES) | TBA | 30 million |
Piz Daint | Cray XC50 System | ETH Zurich/CSCS (CH) | 510 million (7.5 million) | 68 million | Use of GPUs
SuperMUC-NG* | Lenovo ThinkSystem | GCS@LRZ (DE) | TBA | 35 million |

 

*At the time of opening the call, the volume of resources offered on the corresponding system cannot be definitively confirmed. The final volume is expected to be similar to previous calls and will be announced later.

Click here to apply.
