Minisymposia
2nd Sustainable Computing for AI and Data-Intensive Infrastructures
Artificial Intelligence (AI) has become a primary driver of compute demand and energy consumption, with workloads such as large language models and AI-powered computational science applications requiring sustained, infrastructure-scale resources. At the same time, modern data centers operate under increasingly tight power, carbon, and grid constraints, particularly as they integrate variable renewable energy sources. Together, these trends highlight the growing tension between performance scalability, energy availability, and environmental impact. Addressing these challenges requires integrated efforts from component-level optimization to co-designing computing infrastructures, workloads, and energy systems. This minisymposium examines how sustainability can be embedded by design across AI and data-intensive computing infrastructures. Topics include (1) datacenter-power grid co-design to enable load flexibility under renewable integration, (2) infrastructure-level optimization of AI and AI-powered workloads for improved energy efficiency, (3) hardware-software co-design approaches that explore accelerator design trade-offs for AI and scientific computing, and (4) methods, metrics, and lifecycle assessment tools. The session also highlights the economic and environmental tensions between cost-optimal and emissions-optimal system configurations. By bringing together researchers and practitioners working on architectures, systems, and infrastructure design, this session aims to foster a shared understanding of the design principles needed to support sustainable growth in AI and data-intensive computing.
Organizer(s): Denisa-Andreea Constantinescu (EPFL), Florina Ciorba (University of Basel), and Can Hankendi (Boston University)
Domains: Engineering, Computational Methods and Applied Mathematics
3rd High Performance Computing meets Quantum Computing (HPC+QC’26)
Quantum Computing (QC) leverages quantum physical phenomena, such as superposition and entanglement. A mature quantum computer has the potential to solve some exceedingly difficult problems with moderate input sizes efficiently. Still, much work lies ahead before quantum computing can compete with traditional computer technologies, or even successfully integrate with and complement them. The challenges lie in the technological manufacturing approach and in the efficient programming of these systems. In contrast, supercomputers and their applications are significantly larger and operate with proven tools that have been developed over decades. In general, a paradigm emerges where quantum computers will not replace traditional supercomputers. Instead, they will become an integral part of supercomputing solutions, acting as an “accelerator”, i.e., specialized to speed up some parts of the application execution. In this respect, this hybrid HPC+QC approach is where real-world applications will find their quantum advantage. This is a rapidly evolving field of research, which requires coordinated integration work and knowledge of both novel quantum algorithm development and traditional parallel programming. This is the third edition of the successful minisymposium we organized at PASC 2024 and PASC 2025.
Organizer(s): Alfio Lazzaro (HPE), and Alessandro Rigazzi (HPE)
Domains: Chemistry and Materials, Engineering, Physics, Computational Methods and Applied Mathematics
Advanced Fluid–Structure Interaction Modeling for Complex Flows in Biomedical and Engineering Applications
High-fidelity simulation of coupled fluid–structure systems is central to predictive modeling across biomedical and industrial applications. This minisymposium presents complementary advances in geometry generation, patient-specific modeling, and strongly coupled fluid–structure–contact interaction methods, with emphasis on robust numerics and scalable implementation. It will highlight advances in coupled multiphysics modeling and solver technology for flow–structure systems in complex settings. The minisymposium will conclude with a panel discussion on future directions in the field.
Organizer(s): Dominik Obrist (University of Bern)
Domains: Engineering, Life Sciences, Computational Methods and Applied Mathematics
Advancing Atomistic Materials Modeling with GPUs, Novel Algorithms, and Error-Controlled Methods
Advances in computational materials science are increasingly driven by the interplay between high-performance computing (HPC), algorithmic innovation, and electronic-structure theory. First-principles simulations based on density-functional theory (DFT) are now essential across physics, chemistry, and engineering, yet their scalability, accuracy, and reliability face growing challenges on modern heterogeneous and GPU-centric supercomputers. Addressing these challenges requires more than raw computational power; it demands new algorithms, rigorous error control, and hardware-aware software design. This minisymposium brings together researchers from materials science, applied mathematics, and computer science to explore emerging methods that advance atomistic materials modeling in the exascale era. Topics include mathematically rigorous error estimation in DFT, accelerated and robust self-consistent field algorithms for challenging systems, randomized and mixed-precision approaches to large-scale eigenvalue problems, and sustainable porting of electronic-structure codes to modern HPC architectures. By highlighting the co-design of algorithms, numerical methods, and hardware-aware implementations, the session offers an interdisciplinary perspective on how to achieve trustworthy, scalable, and efficient first-principles simulations on next-generation supercomputers.
Organizer(s): Iurii Timrov (Paul Scherrer Institute), Laura Grigori (EPFL, Paul Scherrer Institute), and Michael Herbst (EPFL)
Domains: Chemistry and Materials, Computational Methods and Applied Mathematics
Advancing Medical AI: Challenges for Developing AI-Driven In-Silico Clinical Trials for Accelerating Translational Medicine
Artificial intelligence (AI) has achieved major success in medicine, particularly in diagnostic imaging, pathology classification, and clinical report generation, accelerating research translation and improving care. However, most deployed systems remain task-specific, lack biomedical reasoning, and generalize poorly across data modalities and clinical settings. The Advancing Medical AI minisymposium explores how emerging approaches, especially large multimodal models (LMMs), can enable AI-driven in-silico clinical trials (ISCTs) that better connect research innovation with clinical application. Recent LMMs integrate medical images, text, and structured data to support diagnosis, segmentation, and reporting, enabling the simulation of biological and clinical processes and advancing virtual patient modeling. Key challenges remain in explainability, computational efficiency, privacy protection, and integration with hospital infrastructure, highlighting the need for transparent data governance and verifiable systems. In parallel, ISCTs are gaining momentum as computer-based experiments that model disease progression and therapy response in virtual patient cohorts. Built on digital twins (dynamic computational models continuously updated with clinical data), ISCTs promise lower costs, faster development, and improved safety. Despite their potential, barriers such as data heterogeneity, limited interpretability, validation gaps, regulatory constraints, and infrastructure demands persist.
Organizer(s): John Garcia-Henao (Balgrist University Hospital), and Carlos Barrios Hernandez (Universidad Industrial de Santander, LIG/INRIA – CITI Laboratory)
Domains: Engineering, Life Sciences, Computational Methods and Applied Mathematics
Agentic Workflows for Trustworthy Discovery in Materials Science and Chemistry
Advances in agentic AI—autonomous, tool-using systems that plan, call simulators, reason over uncertainty, and adapt—are poised to transform how we explore vast chemical and materials design spaces. This minisymposium will showcase state-of-the-art methods that couple agentic decision-making with scalable HPC workflows to accelerate hypothesis generation, simulation throughput, and autonomy while strengthening scientific trust. Topics span agent-driven hypothesis generation and assessment, automatic execution of large computational campaigns based on different simulation scales, and workflow/runtime systems that autonomously schedule thousands to millions of tasks across heterogeneous supercomputers with robust provenance and reproducibility. A central thread is Building Trust in Science through HPC Co-Design: contributors will detail how agents, numerical methods, software stacks, and data services are co-designed to deliver validated, reproducible, and/or auditable results using autonomous loops.
Organizer(s): Jan Janssen (Max Planck Institute for Sustainable Materials)
Domains: Chemistry and Materials, Engineering, Physics, Computational Methods and Applied Mathematics
AI and Hardware Acceleration for Computational Biology: Co-Designing Trustworthy and Scalable Life Science Computing
Advances in genomics, proteomics, and molecular modeling have made computational biology one of the most data- and compute-intensive fields of modern science. New discoveries in life science increasingly rely on a combination of AI-driven analysis, large-scale numerical simulation, and heterogeneous high-performance computing (HPC) systems to transform massive datasets into biological insight. At the same time, the HPC ecosystem is undergoing a fundamental shift: hardware accelerators are increasingly optimized for low-precision arithmetic to satisfy dominant AI workloads, while many traditional life science applications, such as molecular dynamics, biomolecular simulation, and population genetics, continue to require high numerical precision, stability, and rigorous validation. Reconciling this growing dichotomy is one of the most urgent challenges facing HPC today. This minisymposium explores how algorithm-software-hardware co‑design can build reliability and transparency directly into accelerated biological computing. The session brings together four prominent speakers from academia, national labs, and leading HPC centers, representing diverse perspectives and covering topics ranging from large-scale pangenomics to molecular dynamics and multi-omics. Together, these talks illustrate how co-design approaches are being applied in practice to reconcile AI acceleration with high-precision scientific computing, and how these insights are shaping the future of high-performance computing beyond the life sciences.
Organizer(s): Gagandeep Singh (AMD), Gina Sitaraman (AMD), Bertil Schmidt (Johannes Gutenberg University Mainz), Kristof Denolf (AMD), Mittul Singh (AMD), and Yijie Xu (AMD)
Domains: Engineering, Life Sciences, Computational Methods and Applied Mathematics
AI: A Panacea or False Promise for Addressing HPC’s Grand Challenges?
The rapid growth of AI models and AI-targeted hardware is significantly impacting everyday life, but it is also creating opportunities for the HPC community to address its current challenges, such as the pressure to simulate larger, more complex problems while reducing time-to-solution, and the need to improve the energy efficiency of both systems and software. This minisymposium explores how HPC can leverage AI advancements to tackle complex problems more efficiently and sustainably. The aim is to highlight promising AI developments and strategies that maximize benefits while avoiding overhyped pitfalls when using AI in traditional HPC workloads. The discussion covers AI’s potential to enhance numerical methods, the adaptation of AI-focused hardware for HPC workloads, and AI’s role in optimizing HPC data centres, offering insights into efficiency improvements. Additionally, the potential of Large Language Models (LLMs) to streamline HPC code development and optimization will be discussed, with the aim of guiding the HPC community in integrating AI technologies effectively, emphasizing real-world applications and challenges.
Organizer(s): Nick Brown (EPCC), Michèle Weiland (EPCC), Adrian Jackson (EPCC), and Rui Apóstolo (EPCC)
Domains: Computational Methods and Applied Mathematics
Application Perspective on SYCL, a Modern Programming Model for Performance and Portability
HPC and data-intensive computing now stand as the fourth pillar of science. However, as scientific discovery increasingly relies on complex, heterogeneous architectures, primarily GPUs, proprietary programming models and vendor lock-in restrict portability and obscure the transparency essential for reproducible and trustworthy science. SYCL is a vendor-agnostic, C++-based standard for heterogeneous computing with several mature implementations for a wide range of hardware accelerators, offering a promising path towards portable and reproducible high-performance computing. As an open standard, it promotes active, bidirectional interaction among all involved parties: hardware vendors, compiler and runtime developers, standards committees, and application developers. This minisymposium fosters the dialogue between scientific application developers and SYCL implementers, sharing experiences on using SYCL as a collaborative, open-software ecosystem for performance-portable accelerated computing. The aim of this minisymposium is to contribute to the wider adoption of open standards across the scientific computing community.
Organizer(s): Andrey Alekseenko (KTH Royal Institute of Technology, PDC Center for High Performance Computing), and Szilárd Páll (KTH Royal Institute of Technology, PDC Center for High Performance Computing)
Domains: Engineering, Computational Methods and Applied Mathematics
Applications of AI and ML Towards Addressing Magnetic Fusion Challenges
Nuclear fusion is increasingly seen as a credible complement to renewable energy, driven today by both public and growing private investments. The leading fusion concepts, tokamaks and stellarators, rely on magnetic confinement. Fusion plasmas exhibit multiscale electromagnetic instabilities, from machine-scale disruptions to microscopic turbulence. The development of these systems thus requires extensive high-performance computing to understand their complex physics and to address the engineering challenges. Fully kinetic models are prohibitively expensive, so hierarchies of reduced models are needed. Machine learning has therefore become a powerful tool, enabling, for example, fast surrogate models for plasma equilibrium solvers and transport modules. Such models are also developed for building sub-system modules integrated into comprehensive physics-engineering digital twin environments. In some cases, these models are optimized to the point that they can be integrated into real-time control systems. These different applications of machine learning to magnetic fusion challenges will be addressed in this minisymposium. The talks will provide an overview of the advanced ML techniques applied and should thus be of interest to a broad audience.
Organizer(s): Stephan Brunner (Swiss Plasma Center, EPFL), Eric Sonnendrücker (Max Planck Institute for Plasma Physics, Garching), and Florian Hindenlang (Max Planck Institute for Plasma Physics, Garching)
Domains: Physics, Computational Methods and Applied Mathematics
Breaking the HPC Silos: Towards Fairness for Tackling Bias in Algorithms and Data
IDEAS4HPC proposes a minisymposium on bias, fairness, and transparency in high performance computing (HPC), addressing ethical and societal challenges that arise as large-scale computational methods increasingly influence scientific discovery, engineering, and public decision-making. The session brings together speakers at the intersection of HPC, data, ethics, and domain science, highlighting how diversity of perspectives and interdisciplinary collaboration can improve the robustness, reproducibility, and societal impact of computational research. The first presentation offers an early-career perspective through a hands-on case study on AI-driven digital twins for drought early warning in the Alps, illustrating how HPC-enabled AI can support equitable, climate-relevant decision-making and environmental resilience. The second contribution examines the design of fair, carbon-aware, and socially responsible scheduling mechanisms for large-scale HPC infrastructures, drawing on operational experience from the CERN Worldwide LHC Computing Grid and the SKA Science Regional Centre Network. The third presentation explores human-centered approaches to fairness and transparency in AI-powered HPC workflows, focusing on human–AI collaboration, explainable AI, and data visualization to enable bias detection, model understanding, and informed human decision-making. The fourth presentation addresses breaking disciplinary silos through federated, FAIR, and fair approaches to Digital Twins, for domains ranging from health to the social sciences.
Organizer(s): Maria Grazia Giuffreda (ETH Zurich / CSCS), Florina M. Ciorba (University of Basel), Marie-Christine Sawley (ICES Foundation), and Maria Girone (CERN)
Domains: Climate, Weather, and Earth Sciences, Engineering, Life Sciences, Computational Methods and Applied Mathematics
Co-Design of Workflow Orchestration Systems for AI/ML-Enhanced Scientific Workloads on HPC Infrastructure
Scientific workflows increasingly combine traditional simulations with AI/ML methods, creating complex pipelines that must operate across heterogeneous, distributed infrastructure. This minisymposium examines how co-design approaches—bringing together domain scientists, HPC architects, and AI researchers—can address the resulting orchestration challenges. We explore three interconnected themes. First, we examine intelligent workflow generation, including how agentic AI and large language models can assist scientists in developing and optimizing workflows for heterogeneous architectures. Second, we address federated infrastructure services, discussing architectural patterns for unified APIs that enable workflows to seamlessly traverse institutional boundaries while respecting security and data governance constraints. Third, we consider AI-ready data and model services, exploring how federated catalogs, automated curation, and model registries can accelerate discovery. While drawing examples from High Energy Physics—where petabyte-scale data and international collaborations have driven workflow innovation—the patterns we discuss apply broadly to climate science, materials research, genomics, and other data-intensive domains. Our concluding panel examines future directions for co-designed scientific computing systems.
Organizer(s): Ozgur Ozan Kilic (Brookhaven National Laboratory), and Charles Leggett (Lawrence Berkeley National Laboratory)
Domains: Engineering, Physics, Computational Methods and Applied Mathematics
Complex Workflows, Resilience, and Data Management Challenges for Large Scale Experiments
Scientific advancement increasingly relies on the ability to process, transfer, and analyze massive data streams in near real time across distributed and heterogeneous computing systems. Fields such as high energy physics, climate modeling, bioimaging, and materials science face growing demands from high-resolution imaging, sensor-rich experiments, and simulation-driven digital twins, all of which require workflows that are resilient, scalable, and low-latency. This session tackles three core themes. First, resilient data management: reliable movement, cataloging, and access to ever-growing datasets. Second, near–real-time workflows: low-latency streaming, analysis, and decision-making, with strategies for heterogeneous architectures. Third, AI-driven modeling and digital twins: predictive workflow optimization and co-design of next-generation infrastructures. By connecting domain-specific challenges with generalizable solutions, the session showcases how integrated, intelligent approaches empower scalable, fault-tolerant scientific workflows and foster interdisciplinary collaboration, advancing the future of data-intensive discovery.
Organizer(s): Raees Ahmad Khan (University of Pittsburgh, CERN), Tatiana Korchuganova (University of Pittsburgh, CERN), and Alexei Klimentov (Brookhaven National Laboratory, CERN)
Domains: Engineering, Physics, Computational Methods and Applied Mathematics
Computational Methods in Optimization and Modeling of Stellarators
This minisymposium highlights cutting-edge computational advances in stellarator design. Unlike tokamaks, stellarators achieve plasma confinement through complex 3D magnetic fields, enabling steady-state operation without plasma current. However, their design involves navigating a vast configuration space, requiring high-fidelity physics models and efficient optimization algorithms. This session presents recent breakthroughs in simulation and optimization tools essential for modern stellarator development. Erol Balkovic introduces SPECTRE, a fast solver for general MHD equilibria with arbitrary magnetic topology, including islands and chaos. Robert Köberl discusses the ideal MHD free-boundary equilibrium problem and the GVEC code. Dario Panici presents DESC, a next-generation optimization framework leveraging JAX for automatic differentiation and GPU acceleration. Finally, Robert Davies showcases rapid computational tools for characterizing magnetic topology at the edge, which is critical for divertor design, and their integration into optimization workflows. Together, these talks underscore the need for advanced algorithms to advance stellarator physics and design.
Organizer(s): Florian Hindenlang (Max Planck Institute for Plasma Physics), Stephan Brunner (EPFL), and Eric Sonnendrücker (Max Planck Institute for Plasma Physics, Technical University of Munich)
Domains: Physics, Computational Methods and Applied Mathematics
Computational Storage for Scientific Computing
The exponential growth of data handled by HPC systems is driven by growing simulation fidelity and an explosion of affordable, high-resolution sensing devices. The rise of machine learning applications and other large-scale data analysis tasks imposes workload patterns on existing storage solutions that often lead to large, avoidable data copies and transfers, as well as contention near storage devices and in the network. This creates challenges for the reproducibility of science and thus limits trust in science. Computational storage is a promising solution to lower barriers to public data access and reduce the time and cost for many workloads that require search across or aggregation of large volumes of data. The session will feature four speakers from academia, government labs, and industry to give their perspectives on the challenges and opportunities of computational storage for scientific computing and how the technology may help to increase trust in science despite the data deluge.
Organizer(s): Jakob Luettgau (INRIA), Kira Duwe (CERN), and Michael Kuhn (Otto-von-Guericke-Universität Magdeburg)
Domains: Computational Methods and Applied Mathematics
Data Management in Scientific Workflows
Data-driven research methods have become essential for many scientific domains, such as the earth sciences, high-energy physics, materials science, and biomedical engineering. Within these domains, data is collected with different tools, stored in different formats and systems, organized and accessed in domain-typical workflows, and tuned heavily towards domain requirements. Additionally, the data-management practices by which data is collected, prepared, processed, and shared vary widely, ranging from highly centralized to small-scale and perhaps siloed approaches. We propose this minisymposium as a forum for experts to share and discuss practices, requirements, and challenges for data management. Our aim is to highlight best practices and challenges across the breadth of scientific computing and to establish a common understanding of scientific data management, enabling the co-design of HPC methods and tools for a diverse portfolio of data-driven, cross-domain scientific workflows and research infrastructure. This minisymposium will consist of three 25-minute presentations, followed by a Q&A session organized as a panel with the presenters. We gave all invited speakers the same assignment: introduce your research and ideas through the lens of demand for, and supply of, next-generation data management for scientific computing. This will also be the focus of the panel.
Organizer(s): Florine Willemijn de Geus (CERN, University of Twente), Vincenzo Eduardo Padulano (CERN), and Ana-Lucia Varbanescu (University of Twente)
Domains: Climate, Weather, and Earth Sciences, Life Sciences, Physics, Computational Methods and Applied Mathematics
Data-Driven Regional Weather Modeling: Towards Trustworthy Convection-Resolving Forecasts
Data-driven weather prediction has advanced rapidly in recent years, with machine-learning-based models now complementing traditional numerical weather prediction. While global data-driven models have demonstrated impressive skill, extending these approaches to regional, convection-resolving forecasting introduces new scientific and computational challenges. At kilometer and sub-kilometer scales, models must represent complex physical processes, integrate high-frequency observations, and provide trustworthy uncertainty estimates, particularly for extremes. This minisymposium focuses on recent progress and open challenges in data-driven regional weather modeling. Topics include generative diffusion models for convective-scale downscaling, graph-based neural network architectures for high-resolution domains, training strategies that improve generalization across scales and regions, and operational perspectives from national meteorological services. The session emphasizes scientific trustworthiness, evaluation, and physical consistency, and discusses how these requirements interact with high-performance computing workflows and model design. By bringing together experts from academia and operational forecasting, the minisymposium provides a forum to assess the state of the art and explore pathways toward reliable, convection-resolving data-driven forecasts, with relevance to a wide range of computational science domains.
Organizer(s): Oliver Fuhrer (MeteoSwiss, ETH Zurich), and Laure Raynaud (Météo-France)
Domains: Climate, Weather, and Earth Sciences, Physics, Computational Methods and Applied Mathematics
Emerging Paradigms for Performance Portability in Earth-System Modeling
Recent advancements in Earth System Models (ESMs) have enabled simulations at unprecedented resolutions (1-kilometre global and sub-kilometre local), pushing the boundaries of forecast quality. However, these simulations demand enormous compute resources, necessitating the largest supercomputers and cutting-edge hardware architectures from various vendors. This has created a significant challenge for ESMs to ensure performance portability, particularly given the legacy Fortran code bases. To address this issue, this minisymposium brings together invited experts who will present diverse approaches to achieving performance portability, including the use of existing portability frameworks and the development of domain-specific languages and compilers, and who will share their hands-on experiences. By showcasing these innovative solutions, this minisymposium aims to facilitate a comprehensive understanding of the current state of the art and future directions in achieving performance portability for next-generation ESMs.
Organizer(s): Georgiana Mania (German Climate Computing Centre), and Xavier Lapillonne (MeteoSwiss)
Domains: Climate, Weather, and Earth Sciences, Computational Methods and Applied Mathematics
Ethical and Societal Considerations Ensuring Trust in Scientific Computing
As more academic disciplines and industry users are leveraging computational methods and scientific computing, the time it takes for HPC-enabled insights to impact society is decreasing rapidly. However, advances in high performance computing (HPC) and artificial intelligence (AI), as well as the increasing complexity of these fields, raise concerns about ensuring trust (with)in science. As a community, we recognise the temptation to exploit ever-growing data volumes and observe new computational methods in action, even when the associated biases and side effects are not fully understood. Co-design for HPC can and should, in addition to technical desiderata, also include broader ethical aspects of scientific computing. For instance, global computing and networking infrastructure have a significant environmental impact, both locally and globally. However, many studies consider only the operational life of a data centre, not the manufacturing process or end-of-life environmental impact. We wish to discuss how to incorporate ethics into all phases of scientific computing, thus making potential trade-offs explicit and highlighting the decisions leading to them. We will feature three speakers and end with an open discussion to explore questions of trust in science and the intersection with ethical and societal concerns.
Organizer(s): Jakob Luettgau (INRIA), Nico Formanek (High-Performance Computing Center Stuttgart), and Jay Lofstead (Sandia National Laboratories)
Domains: Climate, Weather, and Earth Sciences, Applied Social Sciences and Humanities, Engineering, Life Sciences
Fostering an Exascale-Ready Distributed and Accelerated Dense Numerical Linear Algebra Library: The ExaNLA Experience
The rapid shift toward GPU-accelerated architectures in current and upcoming exascale supercomputers has exposed a critical gap in distributed dense numerical linear algebra (NLA) software, particularly within the European HPC ecosystem. The deployment of JUPITER, Europe’s first exascale booster, underscores the absence of a mature NLA library capable of fully exploiting such massively accelerated systems, creating a significant barrier for both scientific and industrial applications that rely on large-scale linear algebra. To address this challenge, an international group of experts launched the ExaNLA initiative in 2025 as an open, community-driven effort to define and guide the development of a next-generation NLA library. ExaNLA brings together specialists in numerical analysis, high-performance computing, and software engineering to shape a shared technical vision, define core functionalities, and promote best practices for scalability, interoperability, resilience, and performance portability. Its activities are organized through focused working groups that explore algorithms, programming models, benchmarking strategies, and community support. This minisymposium serves as a platform to present current progress from these working groups and to discuss the future roadmap of the ExaNLA library through invited talks and a panel discussion, fostering broad engagement and collective planning for the initiative’s next steps.
Organizer(s): Edoardo Di Napoli (Forschungszentrum Jülich)
Domains: Chemistry and Materials, Climate, Weather, and Earth Sciences, Computational Methods and Applied Mathematics
Foundation Models for Weather and Climate
In recent years, weather forecasting and, to a lesser extent, climate modeling have been undergoing a revolution driven by the emergence of machine learning-based models. Having been successfully developed and used in the field of large language models, the foundation model concept is now being applied to machine learning-based weather forecasting. The goal is to capture rich, multi-scale representations of the Earth system across space and time by training on diverse datasets. These models can then be applied to a wide range of tasks, much like traditional equation-based Earth system models. The first large foundation models are now emerging, and their potential applications are actively being explored. The talks in this minisymposium will define the concept of foundation models for weather and climate, discuss both their development and applications, and provide a comprehensive overview of the current state of the field.
Organizer(s): Xavier Lapillonne (MeteoSwiss), Ilaria Luise (ECMWF), and Sebastian Schemm (University of Cambridge)
Domains: Climate, Weather, and Earth Sciences
From Cradle to Grave: Influencing the Environmental Impact of HPC through Co-Design
Whilst HPC plays a crucial role in modern science and engineering, it is well accepted that there is work to be done in balancing the societal benefit with environmental impact. However, when considering this balance, and even trying to understand the environmental impact in isolation, there are many variables that must be considered. Many current sustainability activities focus on one or two areas, but to have the greatest understanding and ultimate impact one must adopt a “full lifetime” approach, taking a view that tracks the environmental impact from cradle (the supply chain and manufacture) all the way to grave (responsible disposal and/or recycling). The aim of this minisymposium is to provide a holistic view across the area, exploring different approaches to considering and tackling these issues across vendors, researchers, and HPC centres. We aim to engage the audience in a discussion around the challenges associated with sustainability and to identify areas that receive less attention but are of great importance.
Organizer(s): Nick Brown (EPCC), and Sylvain Laizet (Imperial College London)
Domains: Climate, Weather, and Earth Sciences
From Drug Design to Environmental Impact: The Digital Materials Frontier
This minisymposium explores computational and experimental strategies to understand the biological and health impacts of chemical exposures and drug–biomolecule interactions, with particular emphasis on mechanisms underlying metabolic and signaling dysregulation. It brings together researchers working at the intersection of computational chemistry, biophysics, pharmacology, and molecular modeling to highlight cross-disciplinary advances and identify opportunities for integrative, digitally driven innovation. A central focus is the study of per- and polyfluoroalkyl substances (PFAS), which represent a global environmental crisis due to their extreme persistence and mobility. Beyond environmental contamination, PFAS are increasingly recognized as endocrine-disrupting chemicals that interfere with metabolic pathways and interact with nuclear and metabolic receptors implicated in diabetes and related disorders. The minisymposium also addresses advances in computational pharmacology and multiscale biomolecular dynamics, including molecular binding models for drug design and virtual screening, as well as quantitative prediction of binding free energies and kinetics. Talks will highlight multiscale frameworks combining coarse-grained models with enhanced-sampling techniques to capture long time-scale processes in large biosystems, with particular attention to GPCRs, transporters, kinases, and membrane receptors.
Organizer(s): Miroslava A. Nedyalkova (University of Fribourg)
Domains: Chemistry and Materials
From Physics to Data – and Back: Trustworthy Machine Learning Potentials for Materials Design
Machine learning interatomic potentials (MLIPs) are transforming electronic-structure modeling by delivering near–quantum mechanical accuracy at greatly reduced cost. As these models increasingly serve as surrogates for explicit electronic-structure calculations, especially for large, complex, or heterogeneous systems, the central challenge becomes establishing trust when direct quantum-mechanical validation is no longer feasible. Balancing data-driven flexibility with physical consistency is essential: symmetry-preserving and physics-informed architectures enhance reliability, while more flexible models demand rigorous verification to ensure they learn the correct physical relationships. Exascale computing reshapes this landscape by enabling large-scale data generation, systematic cross-validation, and interrogation of MLIP behavior under extreme conditions. At the same time, the complexity of training and deploying these models highlights the need for reproducibility, explainability, and robust uncertainty quantification. Emerging approaches aim to embed trust checks – such as enforcing conservation laws, equivariance, and stability – directly into training rather than relying solely on post hoc validation. This minisymposium will gather researchers from chemistry, materials science, physics, and computer science to discuss strategies for developing trusted MLIPs. Topics include physics-informed representations, equivariant architectures, long-range and non-conservative interactions, uncertainty estimation, and automated verification workflows. The session aims to chart pathways toward reliable, scalable, and transparent MLIP frameworks for molecular and materials simulation.
Organizer(s): Michał Sanocki (Technical University of Munich), Ian Störmer (Technical University of Munich), Philip Robin Loche (EPFL), and Julija Zavadlav Koller (Technical University of Munich)
Domains: Chemistry and Materials, Physics, Computational Methods and Applied Mathematics
Hardware–Software Co-Design for Trustworthy Digital Twins in Fusion Energy Research
This minisymposium explores how integrated co-design strategies strengthen confidence in computational models for fusion energy. It examines how GPU-accelerated architectures enhance the reliability and reproducibility of digital twins for plasma physics. Fusion energy research combines complex multi-physics phenomena with AI-driven models, requiring convergence between hardware innovation, software optimization, and scientific validation. The session brings together experts from academia, industry, and major European programmes to highlight co-design efforts that balance simulation scalability with scientific integrity. Via the contributions of EUROfusion partners (SCITAS and CINECA) and the technology partner NVIDIA, this minisymposium will discuss GPU-enabled advances in plasma modelling, uncertainty quantification, and digital twin fidelity. The discussion emphasizes verification, validation, and reproducibility as foundational to scientific trust in high-performance fusion research. The session aims to demonstrate concrete co-design pathways from hardware to scientific validation, establish a reference architecture for trustworthy digital twins applicable beyond fusion, foster collaboration among HPC engineers, domain scientists, and AI experts, and produce actionable guidelines enhancing reproducibility and transparency in co-designed HPC applications.
Organizer(s): Gilles Fourestey (EPFL), and Filippo Spiga (NVIDIA Corporation)
Domains: Engineering, Physics, Computational Methods and Applied Mathematics
HPC and the Health Sciences: Co-Design for Trust, Robustness, and Communication
HPC-powered AI applications are increasingly accurate and robust, but challenges remain when translating these capabilities into real-world prevention and treatment routines. Central to this issue is the notion of trust, which is essential to all stakeholders in the health care complex: patients, providers, management, and governance. Trustworthy AI systems use transparent reasoning processes, are explainable, accountable, robust, fair, honest, privacy-preserving, and amenable to human goals. In the context of health, shortfalls in any of these aspects can block future adoption. In this minisymposium, we will bring in experts from AI model development, health systems analysis, and the clinical translation of AI-integrated cancer treatments.
Organizer(s): Justin M. Wozniak (Argonne National Laboratory, University of Chicago), Thomas Brettin (Argonne National Laboratory, University of Chicago), and Eric Stahlberg (MD Anderson)
Domains: Applied Social Sciences and Humanities, Life Sciences, Computational Methods and Applied Mathematics
HPCs for Society: How Processing Power is Leveraged for Trust and Transparency
High-Performance Computing (HPC) is increasingly central to the deployment of trustworthy, transparent, and accountable AI systems in critical societal domains. This session brings together three complementary contributions that demonstrate how advanced computational infrastructures can strengthen public trust across transportation, energy, and digital communication systems. The first presentation introduces an HPC-enabled intelligent toll management framework that integrates real-time computer vision and blockchain technologies to ensure transparent, auditable, and tamper-resistant transactions in public infrastructure. The second contribution presents HP2C-DT, a multi-tier digital twin architecture for renewable-dominated power systems, where HPC supports large-scale simulations, AI training, and probabilistic scenario analysis to enhance grid resilience and decision reliability. The third paper explores distributed sentiment analysis of large-scale social media data, leveraging HPC-inspired parallel processing and benchmarking methodologies to improve reproducibility, scalability, and confidence in AI-driven societal insights. Together, these works illustrate a unifying theme: processing power is not merely a tool for speed or scale, but a foundational enabler of transparency, auditability, robustness, and reproducibility. By embedding AI within secure, distributed, and computationally rigorous frameworks, HPC technologies play a critical role in building systems that society can trust—supporting fair governance, resilient energy transitions, and accountable digital analytics.
Organizer(s): Tobias Hodel (University of Bern)
Domains: Climate, Weather, and Earth Sciences, Applied Social Sciences and Humanities, Engineering, Computational Methods and Applied Mathematics
HPSF: Building the Future of High Performance Software
As HPC hardware architectures grow rapidly more complex, the demand for portable, vendor-neutral, and high-quality software has never been more critical. Join us for a deep dive into the High Performance Software Foundation (HPSF), a neutral hub under the Linux Foundation for open-source, vendor-neutral, high-performance software, dedicated to fostering a “virtuous cycle” of project quality, community growth, and long-term software sustainability. This minisymposium explores how HPSF tackles this challenge by bridging the gap between hardware vendors and application users through a collaborative software stack co-design approach. Attendees will gain insights into:
* The HPSF and its mission: how open governance and third-party evaluations build societal trust and ensure reproducibility in the HPC ecosystem.
* Institutional perspectives: why leading academic, national lab, and industrial members are aligning their strategies with this neutral hub.
* Project feedback: first-hand accounts from developers on how global collaboration impacts software life cycles and functionality.
* Interactive roadmap: a panel discussion on future directions to scale software sustainability for a broader community.
Whether you are a developer, researcher, or stakeholder, discover how this global community leverages diversity to solve the most pressing challenges in HPC software today.
Organizer(s): Julien Bigot (Maison de la Simulation; CEA, CNRS, Université Paris-Saclay, UVSQ), Todd Gamblin (Lawrence Livermore National Laboratory), and Emily Bourne (EPFL)
Domains: Applied Social Sciences and Humanities, Computational Methods and Applied Mathematics
Impactful Scientific Visualization
Data visualization is a powerful means of exploring data and communicating complex information to diverse audiences, both expert and non-expert. Although widely used by scientists and researchers, it often remains confined to self-exploration and peer communication within specialized fields. Moreover, the use of data visualization for communication purposes is often overlooked. However, advanced visualization tools and techniques can be leveraged across disciplines to disseminate scientific knowledge, reaching other researchers, stakeholders, and the broader public more effectively. The use of advanced visualization tools (such as VTK and ParaView, VisIt, or Blender) and render engines (such as NVIDIA’s Barney and Blender Cycles), and the application of information visualization techniques can improve the quality of scientific visualizations. This minisymposium addresses this topic by presenting different visualization workflows aimed at producing impactful visuals from scientific data, in particular HPC simulation data. The four presentations span a wide range of scientific applications in order to show how common workflows might work independently of specific domains, and can address general visualization challenges applicable across domains: cosmological simulations, CFD, geophysics, and the visualization of ML-based analysis.
Organizer(s): Guillermo Marin (Barcelona Supercomputing Center, Universitat Autònoma de Barcelona), and Guillaume Houzeaux (Barcelona Supercomputing Center)
Domains: Applied Social Sciences and Humanities, Computational Methods and Applied Mathematics
Intelligent Modeling for Sustainable Materials Design: Integrating Physics and Data Across Scales
The transition to a sustainable society critically depends on the discovery of new materials with improved efficiency, durability, and reduced environmental footprint. Achieving this requires transformative advances in the way materials are conceived, enabling rational design paradigms where atomistic modeling and data-driven methods guide synthesis and characterization towards target properties. In-silico approaches can streamline this process, but computer-aided materials design remains a complex multiscale challenge involving phenomena across several spatiotemporal scales. Classical multiscale modeling connects the atomistic resolution at the nanoscale with the macroscopic performance of the final product by sequentially upscaling the fundamental quantities that control material behavior [1]. More recently, innovative strategies combining data mining of synthesis and characterization protocols (both in-silico and analytical) with machine learning regression models have emerged as powerful tools to optimize the synthesis of diverse materials classes [2]. References [1] M. Andersson et al. A general, microkinetic model for dissolution of simple silicate and aluminosilicate minerals and glasses as a function of pH and temperature. Chemical Geology, 2025. [2] J. Guo and P. Schwaller. Directly optimizing for synthesizability in generative molecular design using retrosynthesis models. Chemical Science, 2025.
Organizer(s): Mattia Turchi (Empa), and Ivan Lunati (Empa)
Domains: Chemistry and Materials, Computational Methods and Applied Mathematics
Julia for HPC: Enabling Co-Design in Scientific Workflows
The fifth instalment of the Julia for HPC PASC minisymposium explores how the Julia language enables co-design to shape scientific workflows that can adapt to rapidly evolving computing architectures. As scientific models grow more complex and computing moves toward exascale, building trust in HPC software and results becomes essential. Such trust relies on correctness, reproducibility, and reliable performance across diverse platforms. This minisymposium highlights how Julia and its ecosystem support these goals by fostering close co-design between scientific applications, HPC tools, and emerging hardware and AI technologies. Julia’s single-language approach combines ease of use with high performance, allowing scientists and HPC experts to collaboratively develop, test, and optimise code without separating prototyping from production. A central theme is portability across architectures: Julia enables a single codebase to target CPUs, GPUs, and novel accelerators, as demonstrated by applications such as Oceananigans.jl and emerging TPU support via Reactant.jl, helping prepare scientific software for future HPC systems. Expert speakers will discuss how Julia enables portable, large-scale Earth system simulations, integrates modern AI compiler technologies, and supports differentiable modelling. The minisymposium targets both experienced Julia users and non-Julia users interested in reproducible and reliable HPC workflows.
Organizer(s): Ludovic Räss (University of Lausanne, ETH Zurich), Samuel Omlin (ETH Zurich / CSCS), and Michael Schlottke-Lakemper (University of Augsburg)
Domains: Climate, Weather, and Earth Sciences, Physics, Computational Methods and Applied Mathematics
Latest Developments in First Principle-Based HPC Simulations for Magnetic Fusion
Nuclear fusion is considered a credible complement to renewable energies in the effort towards reaching sustainable development goals. The currently most advanced and promising fusion reactor devices are the tokamak and the stellarator, which are based on the concept of magnetic confinement. Due to the high cost of building such machines as well as the complexity of the processes involved, comprehensive numerical simulations with a substantial HPC component are indispensable to advance our understanding of the underlying physics. This minisymposium is dedicated to the development and application of first-principles kinetic and gyrokinetic codes. As these models compute a 5D or 6D distribution function, the codes require very large computational resources, so special attention must be paid to optimizing both the algorithms and their implementation. The four talks will be dedicated to the performance and optimization of the CGYRO gyrokinetic code for multiscale plasma turbulence simulation, to the development of the new exascale semi-Lagrangian gyrokinetic code GYSELA-X++ with first benchmarks and physics results, to the development of a new geometric Particle-In-Cell code GEMPICX based on the AMReX framework, and to global gyrokinetic simulations with the flux-tube code Stella.
Organizer(s): Eric Sonnendrücker (Max Planck Institute for Plasma Physics, Technical University of Munich), Stephan Brunner (EPFL), and Florian Hindenlang (Max Planck Institute for Plasma Physics)
Domains: Physics, Computational Methods and Applied Mathematics
Nanomaterials in Aquatic Environments: Modeling, Risks, and Earth-System Interactions
Nanomaterials are increasingly present in rivers, lakes, and groundwater, including through atmospheric deposition, raising new scientific questions that span multiple spatial and temporal scales. While engineered and incidental nanomaterials can provide benefits through advanced sensing, catalysis, and environmental remediation, they may also pose risks to aquatic ecosystems, water resources, and biogeochemical cycles. Understanding these dual roles requires computational and experimental approaches capable of linking molecular-scale interactions to hydrological and Earth-system dynamics. This minisymposium brings together researchers from environmental science, computational chemistry, and materials science to discuss emerging methods for characterizing the behavior, fate, and impacts of nanomaterials in aquatic environments. A central focus is the development of multiscale modeling frameworks that integrate chemical reactivity, surface processes, and aggregation and transformation pathways. At the nanoscale, material properties such as surface chemistry, dissolution behavior, and environmental aging critically influence mobility, persistence, and ecological effects.
Organizer(s): Miroslava Nedyalkova (University of Geneva)
Domains: Climate, Weather, and Earth Sciences
Next-Gen Scientific Workflows: Co-Designing for HPC, Cloud, and Beyond
The increasing complexity and heterogeneity of modern scientific applications require advanced approaches to orchestrating and optimizing computational workflows across high-performance computing (HPC) and cloud environments. Scientific domains such as materials science, earth system modelling, and life sciences increasingly rely on complex, data- and compute-intensive workflows that span heterogeneous infrastructures and software stacks. This minisymposium aims to bring together researchers, software developers, and infrastructure providers to explore workflow management and system co-design in the context of real-world applications. The minisymposium covers three use cases, each representing a distinct scientific domain: materials design and discovery, earth science, and computational life sciences. Each talk will showcase domain-specific challenges in workflow definition, execution, and optimization across distributed resources, illustrating how workflow tools and HPC/cloud integration can accelerate scientific innovation. A concluding panel discussion will gather experts in HPC architecture, workflow runtime systems, and scientific software engineering to reflect on cross-domain lessons, best practices, and the future of workflow co-design at exascale. The minisymposium will foster interdisciplinary exchange between domain scientists and technology developers, promoting collaboration toward the next generation of interoperable, efficient, and sustainable workflow solutions for scientific discovery.
Organizer(s): Fabio Affinito (CINECA), Nur Aiman Fadel (ETH Zurich / CSCS), and Filippo Spiga (NVIDIA Inc.)
Domains: Chemistry and Materials, Climate, Weather, and Earth Sciences, Life Sciences, Computational Methods and Applied Mathematics
Open(ing) Development
In recent years, openness through the availability of source code has increased substantially. This trend has been driven by strong initiatives from funding agencies and scientific journals seeking to enhance reproducibility and transparency. In this minisymposium, we aim to discuss both the benefits and challenges of approaches that go beyond mere code accessibility to enable genuine community participation in model development. Communities engaged in Open Development must take into account several key aspects, including governance, communication, and the cultivation of a welcoming and inclusive culture. Equally important are the practical considerations—such as building and maintaining the community, developing comprehensive documentation, and establishing processes that ensure confidence in the quality of contributions, for example through review and testing practices.
Organizer(s): Jan Frederik Engels (German Climate Computing Centre)
Domains: Climate, Weather, and Earth Sciences, Applied Social Sciences and Humanities
Performance through Co-Design: Toward Trustworthy, Reproducible, and Scalable Exascale Workflows
Exascale systems expand the frontier for simulation-driven discovery, data-centric analytics, and operational digital twins. However, scientific impact depends on more than peak performance. It requires co-design that aligns architectures, system software, algorithms, workflows, and data pathways with domain objectives and scientific validity constraints. This minisymposium examines workflow-centric co-design as a key practical strategy for delivering HPC results that are fast, interpretable, reproducible, and dependable at scale. Topics include context-aware performance measurement, I/O and data movement as first-class design concerns, automated provenance capture and FAIR artifact publication, and the resilience and predictability requirements of digital twin deployments, among others. By connecting time-to-solution with trust-to-solution, the minisymposium aims to surface reusable design patterns and open challenges that must be addressed to translate exascale capability into validated, reusable, and widely trusted science.
Organizer(s): Ali Mohammed (HPE), Florina Ciorba (University of Basel), Marta Garcia (Barcelona Supercomputing Center), Utz-Uwe Haus (HPE), and Sarah Neuwirth (Johannes Gutenberg University Mainz)
Domains: Climate, Weather, and Earth Sciences, Computational Methods and Applied Mathematics
Performance-Portable High-Order CFD for Digital Twins on Heterogeneous Exascale Architectures
The emergence of Digital Twins and large-scale computational design is reshaping high-performance scientific computing, demanding scalable, trustworthy, and hardware-extensible solutions. This minisymposium brings together advances in high-order computational fluid dynamics (CFD) and compiler technologies that enable performance portability across heterogeneous CPU–GPU architectures. The three presentations highlight complementary strategies—software architecture, scalable optimization frameworks, and compiler-driven kernel generation—for building next-generation, performance-portable CFD and design tools for heterogeneous exascale systems. The minisymposium will conclude with a panel discussion on future directions in this field.
Organizer(s): Dominik Obrist (University of Bern)
Domains: Engineering, Computational Methods and Applied Mathematics
Responsible AI Systems at Scale for Life Sciences and Society
This Chair’s minisymposium, comprising independent yet complementary submissions, brings together three perspectives on Responsible AI systems at scale: systems that are scientifically rigorous and societally trustworthy. The first speaker sets the stage by introducing how decisions are influenced by model predictions and how researchers use deep learning combined with Explainable Artificial Intelligence (XAI) techniques to enable transparent, real-time interpretability at scale with HPC. The second speaker addresses challenges in AI system design and evaluation, highlighting how benchmarking practices that rely on narrowly defined datasets can yield misleading performance metrics while failing to support the generalizations needed to advance discovery science. The third speaker presents a European AI Factory for life sciences that incorporates at its core an AI-optimized supercomputing node, a federated dataspace, and a trusted AI model validation sandbox. Finally, a moderated panel discussion will encourage further reflection on the challenges of designing, developing, and evaluating the efficacy of Responsible AI systems at scale.
Organizer(s): Elaine M. Raybourn (Consortium for the Advancement of Scientific Software)
Domains: Life Sciences, Computational Methods and Applied Mathematics
Serving Inference: Leveraging HPCs in the Age of Generative AI
Generative AI is reshaping how scientific outputs are produced, yet most widely used tools are operated by commercial providers whose practices around processing, generating, and storing user inputs are often unclear. This lack of transparency raises serious concerns for research organizations handling sensitive or regulated data and complicates the responsible use of even non-regulated scientific content. At the same time, many commercially served models are closed-source and subject to frequent, opaque updates, limiting reproducibility and undermining alignment with FAIR principles. As open-weight and open-source models proliferate, HPC infrastructures are emerging as a promising alternative for hosting and providing controlled access to AI within research environments. However, most HPC systems were not designed for continuous, GPU-based inference services, creating technical and operational challenges spanning deployment, scheduling, reliability, user access, and governance. Consequently, institutions are developing ad hoc solutions with limited opportunities to exchange patterns and lessons learned. This minisymposium convenes a panel of institutions actively building such capabilities to compare approaches and discuss best practices, minimal viable solutions, and ideal configurations for serving generative AI on HPC. To broaden participation, we will use a short questionnaire to structure contributions across four dimensions: technical setup, usage policies, documentation practices, and monitoring/oversight mechanisms.
Organizer(s): Tobias Hodel (University of Bern, Switzerland), and Sukanya Nath (University of Bern)
Domains: Chemistry and Materials, Climate, Weather, and Earth Sciences, Applied Social Sciences and Humanities, Engineering, Life Sciences, Physics, Computational Methods and Applied Mathematics
Single and Reduced Precision for Weather and Climate Models
High-resolution weather and climate models are rapidly approaching the long-standing goal of simulating one year per day at 1 km global resolution, thanks to GPU-accelerated pre-Exascale and Exascale HPC systems. Achieving this performance sustainably, however, requires more than traditional code optimization – it demands HPC co-design, where scientific developers, numerical analysts, and computational experts collaborate to rethink algorithms for modern hardware. One promising strategy is the use of single- or reduced-precision arithmetic. Lower-precision computations reduce memory traffic and energy consumption, improve cache efficiency, and enable exploitation of specialized GPU units optimized for fast, low-precision operations. At the same time, reducing precision raises important questions about scientific trust, numerical stability, and reproducibility, particularly in chaotic, multi-scale Earth system models. This minisymposium will bring together experts from climate science, numerical modeling, and HPC to share experiences, challenges, and best practices in adopting lower-precision computations. Topics include algorithmic reformulation, precision-aware software design, verification and validation strategies, and precision analysis tools. While focused on weather and climate models, the lessons learned are broadly applicable across computational science, offering guidance for building performant, trustworthy, and future-ready models on next-generation HPC platforms.
Organizer(s): Balthasar Reuter (ECMWF), and Claudia Frauen (DKRZ)
Domains: Climate, Weather, and Earth Sciences
Toward FAIR and Reproducible Dark Matter Science: Co-Design of Data Infrastructure, AI Workflows, and Community Datasets
Dark matter constitutes approximately 85% of the universe’s matter, yet its nature remains elusive. Direct detection experiments, though globally deployed, have historically generated data locked within custom formats and non-reproducible software stacks — limiting interdisciplinary analysis and innovation. As AI and machine learning become integral to scientific workflows, ensuring data are FAIR and accompanied by rigorous provenance is essential for reproducibility and trust. This minisymposium uses dark matter science as a concrete case study in co-designing data infrastructure and AI-ready workflows. Centered on the National Science Data Fabric (NSDF) and the SuperCDMS collaboration, three talks address this challenge from complementary angles: rethinking data formats and computing using DELight as a testbed for future experiments; establishing metadata standards that make AI-driven analyses reproducible and auditable; and building an open community dataset by converting proprietary detector data into accessible, machine-learning-ready formats. Together, these case studies show how federated, inclusive data ecosystems broaden participation and strengthen confidence in scientific results — directly aligned with the PASC26 theme, “Building Trust in Science through HPC Co-Design.”
Organizer(s): Michela Taufer (University of Tennessee), Amy Roberts (University of Colorado Denver), and Belina von Krosigk (Kirchhoff-Institute for Physics (KIP))
Domains: Chemistry and Materials, Computational Methods and Applied Mathematics
What the FORTRAN? – Drowning in Waves Edition
Fortran, the primary programming language underpinning many operational weather and climate codes, was built around the fundamental principle that performance optimisation is left to the compiler. However, to fully utilise modern HPC architectures and GPUs, additional programming paradigms as well as invasive code changes are often needed. While high-level DSLs or C-based abstraction layers like Kokkos provide a disruptive path to performance portability, such abstractions are lacking for Fortran. This leads to the fundamental question “What the actual FORTRAN can we do to achieve performance portability for complex physical models?” In this minisymposium we continue this discussion by looking at cross-architecture portability of ecWAM, ECMWF’s operational wave model. ecWAM is a key component of ECMWF’s Integrated Forecasting System (IFS), and is also used operationally in all the IFS-derived Destination Earth digital twins. We highlight the complexities and challenges that its diverse range of computing patterns poses, and, together with industrial and academic partners, discuss different approaches towards a maintainable cross-platform adaptation that runs efficiently across diverse HPC architectures. As ecWAM shares many similarities with other scientifically state-of-the-art third-generation wave models, this minisymposium potentially offers wide-ranging impact across the weather and climate communities.
Organizer(s): Ahmad Nawab (ECMWF), Michael Lange (ECMWF), and Balthasar Reuter (ECMWF)
Domains: Climate, Weather, and Earth Sciences

