Over 200 minisymposia will be held at WCCM 2024. They are listed below in 19 different tracks.
Click on each track to see the list of related minisymposia.
0100 Honorary Minisymposia
Professor Patrick Selvadurai was an academic luminary whose mark on the fields of continuum mechanics, theoretical geomechanics, and applied mathematics has resonated across the world. His unwavering dedication to education, research, and his enduring passion for knowledge dissemination stand as an exemplar for us all.
This mini-symposium will provide an opportunity to honor his journey by bringing together scholars, researchers, and practitioners from various domains of computational mechanics, who have directly or indirectly benefited or taken inspiration from Prof. Selvadurai’s work. The symposium seeks to celebrate and continue his legacy by focusing on the multidisciplinary aspects that defined his distinguished career.
The key areas of focus are:
1) Continuum Mechanics and Geomechanics: explore advancements in theoretical and computational geomechanics, showcasing the evolution of concepts pioneered by Prof. Selvadurai in soil-structure interaction, offshore structures, and environmental geomechanics.
2) Applied Mathematics in Engineering: highlight the impact of Prof. Selvadurai's contributions to applied mathematics in solving complex engineering problems, fostering discussions on mathematical modeling, simulations, and innovative computational techniques.
3) Interdisciplinary Collaborations: emphasize the significance of collaborations across engineering, mathematics, and other disciplines, reflecting Prof. Selvadurai's ability to bridge gaps and forge connections.
4) Educational Excellence: discuss innovative approaches in education and mentoring, inspired by Prof. Selvadurai's commitment to imparting knowledge with passion and dedication.
5) Continuing Philanthropic Endeavors: share plans and initiatives for the foundation set up in Prof. Selvadurai's memory, aimed at realizing his wish to support underprivileged students in pursuing education.
The mini-symposium will comprise presentations and discussions, providing a platform for thought-provoking exchanges and networking opportunities. We especially encourage the participation of former students of Prof. Selvadurai, who now hold academic or industry positions in Canada or abroad.
Professor Patrick Selvadurai's legacy is not limited to his numerous awards, publications, and leadership roles in academia. It transcends disciplines and generations, leaving an enduring impact on the way we perceive, analyze, and solve engineering challenges. This mini-symposium aims to ensure that his legacy lives on by fostering innovation, collaboration, and academic excellence.
In homage to Prof. Selvadurai's life, achievements, and contributions, we cordially invite computational mechanics researchers, engineers, educators, and students to join us in this mini-symposium. Let us gather in Vancouver in July 2024, exactly a year after his passing, to celebrate his legacy, explore the frontiers of computational mechanics, and strive to perpetuate the spirit of inquiry and multidisciplinary excellence that he embodied.
This minisymposium is being organized to honor Prof. Yannis Kallinderis on the occasion of his 60th birthday and to recognize his contributions to various areas of computational mechanics, including unstructured grid-based CFD, hybrid mesh generation and adaptation, computational aeroacoustics, high-performance computing, and parallel supercomputing. Prof. Kallinderis completed his PhD at MIT in the Department of Aeronautics and Astronautics, following his undergraduate studies at the National Technical University of Athens (NTUA). After leaving MIT, he began his career in the ASE/EM department at the University of Texas at Austin in 1989 and was promoted to chaired full professor in 1997. During his tenure at UT, he led the Advanced Computation Engineering (ACE) Lab, supervising several notable graduates, including Dr. Tommy Minyard at TACC, UT-Austin, Dr. Karl Schulz at the Oden Institute, UT-Austin, Dr. Christos Kavouklis at LLNL, Prof. Kengo Nakajima at the University of Tokyo, Japan, and Prof. H.T. Ahn at the University of Ulsan, South Korea. His time at UT was marked by achievements including the NSF Young Investigator award and the AIAA Lawrence Sperry Award. Presently, he serves as a professor at the University of Patras, Greece.
The minisymposium's focus revolves around Dr. Kallinderis’ research expertise; however, it encompasses broader subjects, namely:
Unstructured grid-based methods for CFD
High-order solution algorithms for compressible and incompressible flows
Hybrid mesh generation and adaptation
Computational aeroacoustics
Fluid-structure interaction
Parallel supercomputing
Domain decomposition
Efficient iterative solvers for CFD
Professor JN Reddy has made seminal contributions to the broad field of computational mechanics including areas such as the finite element method, higher-order plate and shell theories, solid and structural mechanics, variational methods, mechanics of fibrous and laminated composites, functionally graded materials, fracture mechanics, plasticity, biomechanics, classical and non-Newtonian fluid mechanics, applied functional analysis, etc. His pioneering contributions have laid the foundations for novel mathematical models, non-classical and non-local theories, novel computational methods and design of novel materials for many emerging areas in science and engineering. The focus of this minisymposium is to celebrate Professor Reddy’s contributions to computational mechanics by bringing together researchers who have worked in areas ranging from foundational computational mechanics to a variety of applications in mechanics and materials.
This minisymposium will include contributions in areas such as computational methods for solids and fluids, non-local and non-classical continuum mechanics, non-classical and geometrically inspired mathematical models for mechanics of emerging problems including complex interfaces, mechano-biology, micro and nanomechanics, multiphysics and multiscale methods for emerging applications in science and engineering, computation-driven design and manufacturing of novel materials, data-driven methods in mechanics, etc.
Other contributions that build upon Professor JN Reddy’s foundational work are also welcome.
The mini-symposium is dedicated to the memory of Professor J. Tinsley Oden in order to celebrate his lifetime contributions to the field of computational mechanics, and more broadly, to Computational Sciences and Engineering. We invite Tinsley’s colleagues, students, friends, and the broader community, to honor the deep legacy of a visionary leader and celebrate his exemplary and legendary life. We anticipate contributions dealing with recent developments on broad topics in computational sciences and engineering and applied mathematics.
0200 Fracture, Damage and Failure Mechanics
This is a long-standing interdisciplinary Minisymposium, held at WCCM 8 (Venice, Italy, 2008), WCCM 9 (Sydney, Australia, 2010), WCCM 10 (Sao Paulo, Brazil, 2012), WCCM 11 (Barcelona, Spain, 2014), WCCM 12 (Seoul, South Korea, 2016), WCCM 13 (New York, USA, 2018), WCCM 14 (Paris, France/virtual edition, 2021) and WCCM 15 (Yokohama, Japan/virtual edition, 2022). Its aim is to bring together specialists in mechanics and micromechanics of materials, applied mathematics, continuum mechanics, materials science, physics, and biomechanics, as well as mechanical, automotive, aerospace and medical engineering, to discuss the latest developments and trends in the computational analysis of the relationships between the microstructural features of advanced engineering and natural materials and their local and global behaviour, as well as the effect of those features on the performance of components and structures.
The topics of the Minisymposium include, but are not limited to, the following:
computational mechanics of advanced materials and structures;
effect of microstructure on properties and performance of advanced materials;
prediction of deformational behaviour and life-in-service of structures and components made of advanced materials;
computational models of biological and biomedical materials;
computational methods for analysis of modern visco-elastic composite and nanocomposites materials;
mechanics of composite materials with relaxation and phase transitions;
simulation of failure mechanisms and damage accumulation processes in advanced materials;
reliability analysis of microelectronic packages;
computational analysis of cutting of advanced materials;
numerical simulation of mechanical behaviour of materials in technological processes;
optimization problems in mechanics of advanced materials and structures.
The accurate and realistic modeling of inelastic behavior, damage, and fracture processes in different materials is extensively discussed in the literature. Many continuum approaches have been presented and their efficiency has been demonstrated in applications across a wide range of engineering fields. Thanks to intensive research activity in recent years, damage and fracture models have now reached a high level of quality. To enable the efficient and accurate numerical analysis of challenging engineering problems, various modified and new techniques in computational mechanics, as well as many robust numerical algorithms, have recently been developed. A variety of continuum models and corresponding numerical aspects, as well as current and future trends in computational damage and fracture mechanics, will be discussed in this mini-symposium.
Many industrial applications have led to the need for the analysis of material failure in challenging multiphysics scenarios, such as hydrodynamic, thermal, chemical, or electric material stimulation. Such scenarios are observed in many natural solids and engineered products. This mini-symposium aims to provide a platform to discuss recent advancements in computational fracture modeling under multiphysics loading conditions. The topics of interest include, but are not limited to, the following:
Novel discretization techniques, e.g. phase-field and regularized damage models, extended/generalized finite element methods, cohesive zone methods, meshless and particle methods, peridynamics.
Constitutive and phenomenological modeling of fracture initiation and propagation, with multiphysics considerations
Mixed finite element formulations and stabilization techniques
Spatial and temporal multiscale techniques to represent various physical processes across scales
Computational homogenization and reduced order modeling
Numerical solution algorithms aimed at reducing the computational cost of non-linear multiphysics problems, including staggered solution methods and iterative methods
Data-driven approaches to constitutive and multiphysics modeling
Machine-learning powered models for efficient and accurate multiphysics and nonlinear mechanics modeling
Case studies focused on application areas: fluid-structure interaction, hydraulic fracture, thermo-plasticity, electro- and chemo-mechanical couplings, corrosion and other environmental factors, and rupture of soft materials.
This mini-symposium deals with state-of-the-art computational modelling methods applied to fracture mechanics and failure analysis. Applications of computational methodologies such as FEM, X-FEM, G-FEM, S-FEM, BEM, IGA, Peridynamics and other advanced numerical techniques will be discussed in the mini-symposium to advance a comprehensive understanding of cutting-edge methodologies and simulations. Fields of interest span a wide range of areas, such as aerospace, automobile, naval architecture, nuclear power, mechanical/civil engineering, and other structural applications. Outcomes of both applied and fundamental research are warmly welcome to enrich the knowledge exchange within the mini-symposium.
Catastrophic failure of materials and structures widely exists in nature and industry. Many disastrous factors, including earthquakes, storms, floods, tsunamis, explosions, etc., can cause sudden destruction of or serious damage to large engineering structures, such as aerospace vehicles, high-pressure vessels, high dams, geotechnical structures, bridges, and tall buildings, which may result in tremendous loss of property and human life. The modeling of material and structural failure has been a great challenge for the computational mechanics community. This mini-symposium aims to bring together academics and practitioners interested in the computational mechanics of catastrophic failure of large engineering structures, to present their ideas and potential solutions on emerging topics in the theoretical and numerical modeling of catastrophic failure of materials and structures. Topics of interest include, but are not limited to:
Progress in research on catastrophic destruction mechanisms and failure analysis
Identification and spatiotemporal distribution of various disaster factors
Mechanics theories and numerical methods for material and structural failure subjected to extreme conditions, e.g., FEM, XFEM, IGA, PF, PD, SPH, MPM, DEM, etc.
Numerical modeling of the effect of disastrous damage factors on structures
Multiscale methods for complex material behavior
Coupled multi-physics problems
Numerical techniques, discretization schemes and software implementation
Damage mechanisms and failure of large engineering structures
Indicators and criteria of structural failure
Fracture and damage mechanics play a pivotal role in understanding material behavior and ensuring the reliability of engineering structures across various industries. Traditionally, fracture models have relied on theories involving simplifications and empirical correlations, limiting their accuracy. Furthermore, numerical simulations involving fracture models can be laborious, demanding an efficient discretization framework as well as a reliable (non-)linear solution strategy. These challenges restrict the applicability of existing fracture models when addressing complex real-world scenarios.
Concurrently, the rapid advancement of scientific machine-learning techniques and scientific computing presents unprecedented opportunities for enhancing the predictive capabilities of fracture models by improving simulation efficiency and accuracy. This mini-symposium aims to explore the synergistic integration of traditional fracture mechanics principles with cutting-edge research in scientific machine learning and scientific computing. The goal is to exchange ideas, methodologies, and challenges at the intersection of fracture mechanics, data science, high-performance computing (HPC), and large-scale solution strategies.
Contributions will cover topics including, but not limited to:
Scientific Machine-Learning: Leveraging novel scientific machine-learning techniques to investigate fracture and failure phenomena across multiple scales, as well as to quantify the uncertainties, enable model discovery, and allow for anomaly detection and damage identification using sensor data and non-destructive evaluation techniques.
Scientific Computing: Leveraging scientific computing tools to elevate the efficiency and scalability of fracture simulations. This involves the development of novel, efficient, and scalable strategies for solving both linear and nonlinear systems in fracture mechanics. Additionally, we are keen on exploring the integration of HPC techniques to boost the performance of the fracture simulation frameworks.
Over the last 25 years or so, variational phase-field models of fracture have established themselves as robust, efficient, and versatile tools.
This mini-symposium serves as a platform for discussing recent advances and applications of phase-field models of fracture. We welcome contributions on topics including, but not limited to, extensions of the current models to a broad range of materials and loading conditions, as well as multiphysics and coupled problems. We aim to strike a balance between analysis, numerical issues, and case studies.
Cementitious materials form the backbone of civil engineering infrastructure. Understanding their fracture, damage, and failure mechanisms is vital, especially with the evolving demands of modern design and longevity. With the push for sustainable and resilient infrastructure, understanding cementitious materials at their core becomes paramount. This symposium aims to provide a comprehensive yet concise overview of the present and future of these materials.
The following topics are welcome.
Basics of Fracture Mechanics:
Delve into stress, strain, and crack propagation dynamics. Explore Griffith's theory and the practical techniques to measure fracture toughness in cement-based systems (a worked form of the Griffith criterion is given after this topic list).
Damage Mechanics:
Differentiate between damage and fracture. Address the evolution of microcracks, influence of material heterogeneity, and the role of additives like aggregates and fibers in the damage process.
Multi-Scale Modeling:
Transition from micro to macro perspectives in damage mechanics. Highlight the application of advanced modeling techniques such as FEM and DEM, ensuring material inhomogeneities are incorporated.
Environmental Impact:
Discuss the effects of long-term loading, environmental conditions like freeze-thaw cycles, and chemical attacks. Introduce strategies for mitigation.
Fiber-Reinforced Concrete (FRC):
Understand the mechanics behind fiber reinforcement. Explore types of fibers (steel, polymer, glass) and FRC performance under varied loads.
Self-Healing Concrete:
Discover the promising world of self-healing mechanisms, from autogenous to bacterial methods, and their implications for durability and resilience.
Non-Destructive Testing (NDT):
Highlight the significance of early damage detection. Introduce state-of-the-art NDT techniques and their integration with predictive maintenance models.
The Future of Cementitious Materials:
Touch upon smart monitoring solutions, sustainable binders, and the increasing role of geopolymers in reducing environmental impact.
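As a concrete anchor for the fracture-mechanics topic above (the classical result only, quoted for orientation), Griffith's criterion for a through-crack of length 2a in a large plate under remote tension σ predicts failure once the energy release rate reaches its critical value:

G = \frac{\pi \sigma^2 a}{E'} \ge G_c ,

with E' = E in plane stress and E' = E/(1 - \nu^2) in plane strain; for quasi-brittle cementitious materials this classical baseline is what cohesive, damage, and size-effect models subsequently modify.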
In the civil and military land, naval and aerospace transportation sectors, as well as in the building and energy sectors, crash, collision, blast and impact are typical loading cases to be accounted for when designing or verifying engineering structures against potential failure induced by accidental overloading or terrorist attack.
From the computational perspective, the challenge is to solve space- and time-multi-scale, multi-phase and multi-physics initial-boundary value problems, and accordingly to develop or adapt:
methods for multi-media interaction or equivalent loading conditions,
advanced rate- and temperature-dependent models applicable to high strain rates and/or high pressures,
methods of space and time discretization, including FEM, FVM, DEM, SPH and their combinations,
numerical simulation methods for problems involving crash, collision, blast or impact.
This mini-symposium aims at providing a forum for discussing new scientific and industrial challenges and developments in the field of computational mechanics in impact and blast engineering.
The phase-field approach is a very powerful technique to model and simulate complex fracture phenomena under various loading conditions, also in multi-field settings and across the scales. Due to its flexibility, this methodology has gained wide interest in the engineering and applied mathematics communities, especially in the past decade. Recently, the phase-field approach has been extended to model fatigue failure in the low- and high-cycle regimes.
This mini-symposium provides a forum for the discussion and exchange of ideas related to new advances and applications of the phase-field approach to fracture and fatigue in engineering. It welcomes contributions on phase-field modeling of fracture including brittle, cohesive and ductile fracture in solid and structural mechanics. Research results on basic aspects of phase-field formulations and of their numerical implementation, experimental validation as well as extensions to novel and/or more complex settings and relevant applications are all welcome.
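For orientation, one widely used variational form regularizes the Griffith energy with a damage field d ∈ [0, 1] and a length scale ℓ (the so-called AT2 model; the formulations discussed in this session may differ):

E_\ell(u, d) = \int_\Omega (1 - d)^2\, \psi_e(\varepsilon(u))\, dV + G_c \int_\Omega \left( \frac{d^2}{2\ell} + \frac{\ell}{2} |\nabla d|^2 \right) dV ,

where ψ_e is the elastic energy density and G_c the critical energy release rate; fatigue extensions typically degrade G_c through a history variable accumulated over the load cycles.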
Fracture and damage mechanisms that lead to failure of structural materials are complex processes and pose challenges, in many aspects. For instance, models for bridging the various length-scale phenomena, predictive models with parameters based on physically inspired quantities, validation by the experimental observations, and overcoming computational obstacles are goals shared by many researchers. Given the complexity and breadth of the scientific expertise necessary, we believe that it is crucial for researchers to be informed and inspired by the community. We propose to bring those active in experiment, theory, and computation together and to promote interdisciplinary collaboration.
The organizers from Los Alamos National Laboratory (LANL) would like to invite presentations on ductile and brittle damage modeling, experiments, and computational simulations. The current status of and progress on the work being conducted at LANL will be shared, and we will facilitate communication between the various teams. We would like to extend the scope of this mini-symposium to include innovative research from the larger scientific community.
More specifically, we solicit presentations on the following topics:
Ductile damage models
Brittle damage models, at the microscopic and continuum levels
Damage nucleation and evolution
Shear localization/shear banding
Mesoscopic damage experiments
Fracture experiments
Fracture modeling
Phase field modeling of fracture
Molecular dynamics simulations
Multi-scale methods
Multi-rate (quasistatic, dynamic, and shock-loading)
0300 Advanced Discretization Techniques
Isogeometric Analysis (IGA) was originally introduced and developed by T.J.R. Hughes, J.A. Cottrell, and Y. Bazilevs in 2005 to generalize and improve finite element analysis in the area of geometry modeling and representation. In the course of IGA development, however, it was found that isogeometric methods not only improve geometry modeling within analysis, but also appear preferable to standard finite elements in many applications on the basis of per-degree-of-freedom accuracy. Non-Uniform Rational B-Splines (NURBS) were used as the first basis function technology within IGA. Nowadays, a well-established mathematical theory and successful applications to solid, fluid, and multiphysics problems render NURBS functions a genuine analysis technology, paving the way for the application of IGA to a number of problems of academic and industrial interest. Further fundamental research topics within IGA include the analysis of trimmed NURBS, as well as the development, analysis, and testing of flexible local refinement technologies based, e.g., on T-splines, hierarchical B-splines, or locally refined splines, in the framework of unstructured multipatch parameterizations. Moreover, an important issue is the development of efficient strategies able to reduce matrix assembly and solver costs, in particular when higher-order approximations are employed. Aiming at reducing the computational cost while still taking advantage of IGA's geometrical flexibility and accuracy, isogeometric collocation schemes have attracted a good deal of attention and appear to be a viable alternative to standard Galerkin-based IGA. Another very promising topic, deserving special attention in the IGA context, is structure-preserving discretizations. Along (and/or beyond) these research lines, the purpose of this symposium is to gather experts in computational mechanics with interest in the field of IGA, with the aim of further advancing its state of the art.
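As a brief illustration of the basis functions underlying IGA, a NURBS basis function of degree p is the rational combination of B-spline basis functions N_{i,p} with weights w_i,

R_{i,p}(\xi) = \frac{N_{i,p}(\xi)\, w_i}{\sum_j N_{j,p}(\xi)\, w_j} ,

and the same basis is used to represent both the geometry and the unknown fields, which is the defining idea of the isogeometric paradigm.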
Meshfree, particle, and peridynamic methods offer a new class of numerical methods that play an increasingly significant role in the study of challenging engineering and scientific problems. New and exciting developments of these methods often go beyond the classical theories, incorporate more profound physical mechanisms, and become the exclusive numerical tools in addressing the computational challenges that were once difficult or impossible to solve by conventional methods.
The goal of this minisymposium is to bring together experts working on these methods, share research results, and identify emerging needs for more rapid progress in advancing the important fields of meshfree, particle, and peridynamic methods. Topics of interest for this minisymposium include, but are not limited to, the following:
Recent advances in meshfree, particle, and peridynamic methods, and their coupling with other computational methods such as IGA, material point method, and finite element method
Immersed approaches for non-body-fitted discretizations
Enrichment of basis functions for non-smooth approximations
Integration of physics-based and data-enabled approaches
Enhancement of meshfree, particle, and peridynamic methods by machine learning algorithms
Strong form collocation methods
Stabilization for under-integrated Galerkin methods
Methods for coupling multiple physics and/or multiple scales
Parallel computation, solvers, and large-scale simulations
Recent advances for challenging industrial applications: modeling extreme loading events, additive manufacturing, and mitigating disasters
Methods enabling a rapid design-to-analysis workflow
The minisymposium aims at gathering researchers who develop and implement novel discretization techniques that extend the domain of classic finite element approaches and make use of general polygonal/polyhedral meshes, with particular focus on the Virtual Element Method. These technologies also include continuous and discontinuous Galerkin methods on polytopal meshes, structure-preserving mimetic discretizations, hybrid high-order methods, to name a few.
In the last decade, sustained development of the virtual element method (VEM) as a new approximation technology has taken place, and it has been applied to solid and fluid mechanics problems. VEM meshes with convex and concave elements offer greater flexibility in mesh design and allow efficient strategies for adaptivity. In the context of computational mechanics problems involving internal interfaces and moving discontinuities, such as in the simulation of layered and fractured materials, the versatility of these methods allows the geometric complexity to be tamed, providing robustness with respect to rough heterogeneities and mesh distortions. Owing to this versatility, the approach continues to evolve, finding applications in engineering and computational mathematics and being constantly extended to new fields.
Engineering applications include:
elasticity for small and inelastic deformations,
plasticity across the scales,
fracture mechanics in two and three dimensions,
homogenization techniques,
plate problems,
contact mechanics,
coupled and multi-scale problems,
General polyhedral meshing including cut-cell techniques and hybrid meshing
Mesh adaptivity, coarsening, and aggregation strategies
Image-based modeling including computed tomography
Rapid design-to-simulation workflows
Topology and shape optimization
Space-time formulations
The minisymposium welcomes contributions from the theoretical, computational and application viewpoints, with the hope that it can serve as a forum for the exchange of new ideas.
Immersed Boundary Methods (IBM) have been attracting strongly increased attention during the past ten to fifteen years. Their central principle is to extend a domain of computation to a larger one, typically with a simple shape, which is easy to mesh. On this extended domain a finite element type computation is performed, distinguishing between regions interior and exterior to the original domain. Under the names 'fictitious domain' or 'embedded domain' methods, this central principle has been followed since the 1960s. The recent renewed interest results from innovative and efficient algorithmic developments, from mathematical analyses showing optimal convergence despite the presence of cut elements, from the possibility of efficiently linking these methods to various types of geometric models, and from many new engineering applications. Many variational versions of Immersed Boundary Methods have been developed, like CutFEM, the Finite Cell Method, Unfitted Finite Elements, the Shifted Boundary Method, Phi-FEM, Immersogeometric Analysis, just to name a few.
This mini-symposium will focus on Immersed Boundary Methods of variational type, and the various aspects that make them successful in addressing complex problems, namely: mathematical analysis, a priori and a posteriori error estimation and adaptivity, advanced numerical integration procedures, data structures and parallel scaling of algorithms, integration with CAD models and non-standard geometric representations, and applications. The scope of this mini-symposium is to be as broad as possible in terms of applications, such as, but not limited to: problems in solid mechanics, heat transfer, CFD, fluid/structure interaction, and any other types of domain coupling. Topics will also include computational homogenization techniques and the connection between Immersed Boundary Methods and meta-algorithms, such as the ones used in Uncertainty Quantification, Reduced Order Models, Machine Learning and Artificial Intelligence, Direct and Inverse Problems, and Topology Optimization, just to name a few.
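To fix ideas, a minimal example of the variational setting is the standard symmetric Nitsche treatment of a Dirichlet condition u = g on an immersed boundary ∂Ω for the Poisson problem −Δu = f, with Ω embedded in an easily meshed background domain (stated here only as an illustration; the methods listed above differ in their stabilization and integration details):

\int_{\Omega} \nabla u_h \cdot \nabla v_h \, dx - \int_{\partial\Omega} (\partial_n u_h)\, v_h \, ds - \int_{\partial\Omega} (\partial_n v_h)\, u_h \, ds + \frac{\gamma}{h} \int_{\partial\Omega} u_h\, v_h \, ds = \int_{\Omega} f\, v_h \, dx - \int_{\partial\Omega} (\partial_n v_h)\, g \, ds + \frac{\gamma}{h} \int_{\partial\Omega} g\, v_h \, ds ,

with all integrals restricted to the physical domain and its boundary and evaluated on cut background elements; ghost-penalty or related stabilization terms are typically added to control the conditioning associated with small cut cells.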
Coupled problems appear in many important industrial applications, such as fluid-structure interaction, magnetohydrodynamics, multiphase flows, etc. The study of real-life applications involves the interplay of various complex physical processes, necessitating the development of state-of-the-art numerical methods that are accurate and robust, but also computationally efficient for large-scale simulations.
In this mini-symposium, we aim to provide a platform for researchers developing novel techniques for coupled problems in incompressible fluid dynamics with an emphasis on either theory or practice. Numerical methods of interest include robust, efficient, structure-preserving, and high-order methods using various spatial discretization techniques (e.g. finite element, finite volume, spectral element).
Both continuum mechanics and kinetic (particle) models have an elegant description in terms of geometric mechanics formulations, such as variational, Hamiltonian, metriplectic, GENERIC and port-Lagrangian/Hamiltonian. Based on understanding a system’s configuration space and its corresponding symmetries, these geometric descriptions enable accurate representation of both reversible (thermodynamic energy-conserving) and irreversible (thermodynamic entropy-generating) dynamics, along with the correct interconnection/coupling between systems. This can be done in both Lagrangian and Eulerian coordinates, with corresponding conversions between the two representations. Additional insight is gained by working in terms of exterior calculus (differential forms) rather than the usual vector or tensor calculus. Specifically, exterior calculus leads to a clear mathematical representation of the physical field quantities and a rich insight into the properties of their associated functional spaces; along with a clean separation between topological and metric parts of the equations. This is of fundamental importance when constructing numerical discretization models.
Building on this, it is possible to develop numerical models that emulate the fundamental features of these geometric mechanics formulations, leading to many desirable properties. Such methods are known as structure-preserving or mimetic, and are based on discrete versions of exterior calculus that enable the construction of a discrete version of a geometric mechanics formulation. These structure-preserving discretizations display remarkable benefits over more naïve discretizations, including freedom from spurious/unphysical numerical modes, consistent energetics, controlled dissipation of enstrophy or thermodynamic entropy, and stable coupling between subsystems. By constructing discrete methods which mimic the structure of their continuous counterparts, we obtain numerical schemes which are more stable, more interpretable, and more respectful of known dynamical invariants, leading to improved accuracy and physical realism in resulting simulations.
This minisymposium brings together researchers studying and implementing these ideas at both the continuous and discrete levels across a wide range of continuum mechanical and kinetic models, including geophysical fluid dynamics, plasmas, compressible flow, and solid mechanics. We welcome contributions related to all areas of geometric mechanics and structure-preserving discretizations including (but not limited to) novel geometric mechanics formulations for both continuum mechanical and kinetic models and structure-preserving spatial, temporal and spatiotemporal discretizations such as variational integrators, compatible Galerkin methods (e.g. finite element exterior calculus, mimetic Galerkin differences, compatible isogeometric methods), discrete exterior calculus, symplectic/Poisson/metriplectic time integrators, energy-conserving time integrators, and structure-preserving reduced order models.
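As one elementary example of the structure preservation at stake (an illustration only, not tied to any particular contribution), the implicit midpoint rule applied to a canonical Hamiltonian system ż = J∇H(z) reads

z_{n+1} = z_n + \Delta t \, J\, \nabla H\!\left( \frac{z_n + z_{n+1}}{2} \right) ,

is symplectic, preserves quadratic invariants exactly, and exhibits the long-time near-conservation of energy that motivates the more elaborate discrete constructions discussed in this session.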
In this minisymposium we seek to highlight challenging problems in computational solid mechanics that require rapid model building and mesh adaptivity for their solution. We focus on finite element and other emerging discretization methods for large deformations and the accompanying inelasticity, localization, and failure. Discussion will center on Lagrangian descriptions and the computational components necessary to resolve, preserve, and evolve the fields that govern these processes. Prototypical material systems may include, but are not limited to, polymers, structural metals, and biomaterials.
Topics of interest:
Novel methods for discretization
Tetrahedral, hexahedral, and other 3D element technology
Local remeshing including topological changes and smoothing
Field recovery and mapping of internal variables
The goal of this mini-symposium is to promote discussion and facilitate the exchange of knowledge and expertise on the fundamentals and applications of particle-based numerical methods for solving a variety of multi-physics problems. Multi-physics applications involve the convergence of various physical processes within the scope of fluid mechanics, solid mechanics, heat and mass transfer, and beyond. The focus will be on particle-based methods, both as a discretization concept and as representations of physical particles, including, but not limited to, distinguished methods such as the smoothed particle hydrodynamics method (SPH), the moving particle semi-implicit method (MPS), the material point method (MPM), and the discrete element method (DEM). The coupling of these methods with conventional numerical techniques is also within the scope of this mini-symposium. Join us in this enriching mini-symposium as we unravel the intricacies of particle-based methods, share insights, and foster connections at the crossroads of multi-physics exploration.
The spatial discretization within the framework of the traditional finite element method is based on polynomial approximation and simplex elements. This approach simplifies the element formulations but places a meshing burden on numerical analysis. The last few decades have seen the introduction of a large number of advanced discretization and approximation methods aimed at alleviating the intrinsic difficulties of the finite element method.
Among the large number of methods in computational mechanics, a particular subclass of approaches can be classified as polytopal methods. Two examples are the scaled boundary finite element method (SBFEM) and the virtual element method (VEM). Within solid mechanics, polytopal elements have seen applications in both the linear and nonlinear regimes, in two and three dimensions. The range of applications includes problems of mathematics and engineering and is constantly extended to new fields. Among these problems are elasticity for small and inelastic deformations, such as elasto-plasticity, and fracture mechanics in two and three dimensions. Extensions of these methods to compressible and incompressible nonlinear elasticity and finite plasticity are recent, as are applications to contact mechanics and to coupled and multi-scale problems.
This mini-symposium aims at gathering researchers in the engineering and mathematics communities active in VEM, SBFEM, and other polytopal methods. It welcomes contributions from the theoretical, applied, and computational points of view, and is intended as a fruitful occasion for the interdisciplinary exchange of ideas.
Iterative coupling algorithms and Enriched Finite Element Methods (e-FEMs) such as the Generalized/eXtended FEM are two distinct but related classes of methods that are often used to solve multiscale, fracture, moving-interface, and other challenging problems in mechanics. e-FEMs have received increased attention and undergone substantial development during the last two decades. Recent focus has been placed on improving the method's conditioning, and on the development of Interface- and Discontinuity-Enriched FEMs as alternative procedures for analyzing weak and strong discontinuities. Questions of conditioning, robustness, and performance are common to e-FEMs and iterative coupling algorithms.
As these methods mature, a common challenge concerns their implementation in available software, which is often difficult, time-consuming and, therefore, expensive. One strategy to address this issue is to non-intrusively couple commercial and research software and thus provide the end-user with simulation and modeling capabilities not available in any single software package.
This mini-symposium aims to bring together engineers, mathematicians, computer scientists, and national laboratory and industrial researchers to discuss and exchange ideas on new developments, applications, and progress in coupling algorithms and Enriched FEMs. While contributions on all aspects of these methods and their implementation are invited, topics of particular interest include:
verification and validation; accuracy, computational efficiency, convergence, and stability of e-FEMs and coupling algorithms.
new developments for immersed boundary or fictitious domain problems, flow and fluid-structure interaction, among others.
applications to industrial problems exhibiting multiscale phenomena, localized non-linearities such as fracture or damage, and non-linear material behavior.
acceleration techniques for coupling algorithms.
coupling algorithms for multi-physics and time-dependent problems.
High-order methods in computational fluid dynamics have been the subject of academic studies and industry interest for over two decades, due to their prospects of yielding high levels of accuracy at computational costs that are lower than traditional second-order methods. Many advances in high-order methods have already been made, on various fronts, including discretization, stability, solvers, mesh generation, error estimation, adaptation, and applications. However, these methods have still not yielded the proper combination of efficiency and robustness required for widespread use and adoption by industry, and hence the topic remains an important research interest.
The focus of this minisymposium is on theoretical advances in high-order numerical methods aimed at overcoming their challenges, as well as application demonstrations that stress the limits of high order and identify new challenges. Numerical methods in the scope of this minisymposium include finite volume, finite-difference, (weighted) essentially non-oscillatory, continuous/discontinuous finite element, spectral difference/volume methods, and other related discretizations. Relevant topics include, but are not restricted to, spatial discretization, time integration, shock capturing, mesh generation, error estimation, adaptivity, visualization, implementations on novel architectures, hybrid methods, scale-resolving simulations, magnetohydrodynamics, and innovative uses of machine learning methods. Of interest is also work in high-performance computing that is related to high-order methods, including GPU implementations and quantum computing.
The design of structure-preserving numerical methods for multiphysics systems has become an increasingly attractive research field in recent years. Structure-preserving schemes come with the promise of enhanced numerical stability and robustness. They can be viewed as an extension, to coupled dissipative systems, of the conserving schemes previously developed in the context of conservative Hamiltonian systems with symmetry. The coupling of several fields makes the design of structure-preserving schemes particularly demanding. On the other hand, the interaction of different fields may cause numerical instabilities when standard discretization techniques are applied. Structure-preserving methods have the potential to correctly reproduce coupling effects in the discrete setting and are thus less prone to numerical instabilities.
The space-time discretization of multiphysics systems is strongly affected by the way in which the underlying field equations are written, including the choice of variables. The structure of the underlying balance laws is built into specific descriptions such as GENERIC or the port-Hamiltonian formulation which thus might be of advantage for the design of structure-preserving schemes.
The present minisymposium aims at bringing together researchers from different fields dealing with the design of structure-preserving space-time discretization methods for multiphysics systems. Applications may focus on both dissipative solids as well as fluids. Specific applications may deal with, among others, large-strain thermo-elasticity, electro-thermo-elastodynamics, soft magnetoactive materials, polyconvex electro-mechanics, thermo-electro-viscoelasticity of dielectric elastomers, shallow-water flow problems, or complex fluids.
0400 Multiscale and Multiphysics Systems
Multiscale computational homogenization methods refer to a class of numerical homogenization techniques for determining the effective behavior of complex and highly heterogeneous materials, and for computing the response of structures composed of these materials. The main added value of computational homogenization consists in surpassing limitations of analytical approaches, e.g. incorporating realistic multi-phase morphologies and complex nonlinear material behavior.
This minisymposium focuses on the development and application either of multiscale computational homogenization methods, including all pending challenges in this area, or of modeling and simulation methods at the scale of heterogeneous microstructures with an implicit or explicit connection to another scale. Particular emphasis is given to complex models that incorporate specific phenomena at a given scale and the related simulation challenges (complex morphologies, large models, lack of a deterministic description of constituents, presence of interfaces…), as well as to emergent behavior (effective behavior not described by the individual constituents).
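For orientation, in standard first-order computational homogenization the macroscopic stress and strain are volume averages over a representative volume Ω_RVE, and admissible scale-transition conditions satisfy the Hill–Mandel macro-homogeneity condition:

\bar{\sigma} = \frac{1}{|\Omega_{\mathrm{RVE}}|} \int_{\Omega_{\mathrm{RVE}}} \sigma \, dV , \qquad \bar{\varepsilon} = \frac{1}{|\Omega_{\mathrm{RVE}}|} \int_{\Omega_{\mathrm{RVE}}} \varepsilon \, dV , \qquad \bar{\sigma} : \delta\bar{\varepsilon} = \frac{1}{|\Omega_{\mathrm{RVE}}|} \int_{\Omega_{\mathrm{RVE}}} \sigma : \delta\varepsilon \, dV .

Several of the topics below (higher-order gradients, lack of scale separation, localization) concern precisely the situations in which these first-order relations cease to hold.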
The topics covered include (but are not limited to):
FE² methods and alternatives (e.g. FE×FFT, FE×Discrete Elements, ...);
Machine-learning/artificial intelligence techniques and surrogate modeling for multiscale analysis
Advanced algorithms for reduction of computational costs associated with multiscale algorithms (model reduction, parallel computing…)
Data-driven multi-scale mechanics
Numerical or virtual material testing across the scales;
Emergent behavior through upscaling
Scientific computing and large data in multiscale materials modeling
Coarse-graining of nano- and micromechanics
Numerical modeling of materials based on realistic microstructures, e.g. provided by high resolution 3D imaging techniques;
Computational homogenization of linear, time-dependent and nonlinear heterogeneous materials, including material dynamics and metamaterials;
Heterogeneous materials with coupled multi-physics behavior (phase change, chemo-mechanics, nonlinear thermo-mechanics...), including extended homogenization schemes;
Multiscale damage modeling, capturing the transition from homogenization to localization;
Computational homogenization including size effects, higher-order gradients or lack of scale separation;
Numerical modelling of the macroscopic behavior of microstructures with complex interfaces, microcracking, instabilities or shear bands;
Integration of stochastic microscopic models and their multiscale treatment
The advent of advanced manufacturing and materials technologies now provides the capability to architect microstructured materials such as 3D-printed lattice structures, fiber-reinforced or multiphase composites, foams, electro- or magneto-active polymers, etc. The mechanical and multifunctional behaviors of these metamaterials can be tailored to their specific engineering applications and are often highly nonlinear, anisotropic, inelastic, and multiphysical. Classical constitutive models are therefore typically not flexible enough to model their effective material behavior in multiscale and multiphysics simulations, while concurrent multiscale approaches are inherently computationally expensive and slow. Thus, in recent years, the formulation of constitutive models using highly flexible machine learning and surrogate modeling methods such as artificial neural networks and deep learning, Gaussian processes, radial basis functions, clustering methods, etc. has gained momentum. Nevertheless, many challenges remain to be addressed for machine learning-based material models, such as their accuracy, reliability, and physical soundness, their efficiency, the consideration of inelastic material behaviors, parametric dependencies or uncertainties, etc.
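One common route to physical soundness in such models (given here only as an illustrative example, not a prescription) is to learn a strain-energy potential W_θ rather than the stress directly, so that the stress follows by differentiation and a hyperelastic structure is preserved by construction:

\sigma = \frac{\partial W_\theta(\varepsilon)}{\partial \varepsilon} ,

with W_θ a neural network, for instance formulated in terms of strain invariants and constrained to be convex and to vanish at zero strain; dissipative and coupled behaviors are then handled by additional learned potentials or internal variables.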
This minisymposium welcomes contributions on the state-of-the-art of machine learning methods for multiscale and multiphysics materials modeling. In particular, the areas of interest include, but are not limited to:
Material models based on feed-forward, deep, recurrent, convolutional, graph, and other types of neural networks, or Gaussian processes, radial basis functions, clustering methods, etc.
Models for elastic, as well as dissipative, inelastic (elasto-plastic, visco-elastic, etc.), and multiphysically coupled (electro, magneto, thermo, chemo, mechanical, etc.) material behaviors
Physics-guided/informed/augmented machine learning methods for thermodynamically consistent, physically, and mathematically sound material models
Consideration of parametric dependencies, uncertainties, adaptivity, error estimates, etc. in machine learning methods for material modeling
Efficient implementation and application of machine learning methods for multiscale and multiphysics simulations
The increase in computational resources has greatly enhanced the resolution at which we are capable of approximating physical phenomena. One of the most challenging tasks for a modeller is to choose the relevant (and irrelevant) parts of a phenomenon, and among these choices resides the choice of a characteristic scale. Fortunately, newer mathematical tools such as homogenization and mixed-dimensional modeling can help in developing upscaled models that capture the contribution of phenomena at much smaller scales, as can simply scaling up the dimension of the computational model.
In this minisymposium we hope to foster the dialogue among communities using different tools for the mathematical modeling of multiphysics and multiscale phenomena, such as cardiovascular biomechanics, fracture mechanics and poroelasticity. Our interests are broad, meaning that we welcome contributions coming from:
Theory: mathematical homogenization, mixed dimensional systems, continuum mechanics.
Modeling: soft tissue, metabolism, geosciences.
Methodology: multiscale and multiphysics solvers, linear systems and preconditioners, operator splitting, partitioned schemes, numerical discretizations, software for scientific computing
Mixed-dimensional PDE systems arise when coupling unknown fields defined over domains of different topological dimension. They characterize a broad range of relevant problems in many scientific and engineering fields, such as fluid flow in fractured porous media, the design of very large floating sea structures, and coupled cortex-cytoplasm dynamics in living cells. They can also be used to impose non-standard interface conditions on a lower-dimensional embedded subspace, e.g., through a Lagrange multiplier.
The aim of this minisymposium is to share and discuss the latest advancements, challenges and perspectives around the numerical approximation of mixed-dimensional PDEs, with a special interest in problems with moving boundaries and interfaces. Topics of interest range from modelling aspects, mathematical analysis, computer implementation and efficiency issues to innovative applications. We will address both parametric and immersed boundary approaches, including (but not limited to) arbitrary Lagrangian-Eulerian, Unfitted Finite Element Methods, Phase-Field or Virtual Element Methods.
The design of a material at the microscale greatly impacts its mechanical response at different observation scales and under different loading conditions. Representative Volume Elements (RVEs) have successfully been used for homogenization and multiscale modeling of materials, particularly for linear and quasi-static loading conditions. In addition to such RVE-based approaches, this minisymposium seeks homogenization and multiscale methods that address the dispersive response of elastodynamic metamaterials and other microstructured media, the quasi-static and dynamic response of random media, and approaches that incorporate material failure at multiple scales. Broad topics of interest are (but not limited to):
Dynamic homogenization methods:
Field averaging, parameter retrieval, and effective (apparent) properties
Spatial dispersion and higher order methods, and Willis material
Effect of disorder and randomness
Novel unit cell designs for phononic crystals and mechanical metamaterials:
Use of nonlinear elasticity and friction, fluid-based designs, dynamic and adaptive design.
Parametric and topology optimization of unit cells and graded microstructure.
Seismic, acoustic and vibration, and blast and ballistic applications of dispersive materials.
Multi-fidelity approaches to design of microstructured media
Statistical homogenization and upscaling methods:
Statistical homogenization approaches that relate inherent material randomness to statistical variation of macroscopic quantities of interest both for quasi-static and dynamic regimes.
Effect of disorder on dynamic response of metamaterials.
Material damage and other nonlinear effects:
Failure of mechanical metamaterials and other dispersive materials under dynamic loading.
Homogenization and multiscale schemes incorporating material failure across scales.
Fragmentation problems and rate effects arising from material microstructural design and failure processes.
Numerical methods for aforementioned problems:
Volume-Element based multiscale methods.
Reduced order models.
Machine learning-based multiscale methods.
High performance computing aspects (parallel computing, high order methods, mesh adaptivity, and multi-resolution schemes).
This minisymposium provides a forum for communication and discussion of recent developments and applications of novel multiscale computational and data-driven approaches for advanced materials and structures across a wide span of spatial and temporal scales.
Recent development and application of multiscale computational approaches have focused on constitutive modeling and a more comprehensive description of the nonlinear deformation, physical failure, damage evolution, and environmental aging of advanced materials and structures (including nanoscale aggregates, mesoscale structures and segregations, and macroscale laminates). In particular, the reactive nature of material behavior is described down to extreme scales to establish structure-to-property relationships. At the same time, data-driven computational approaches provide an innovative paradigm shift in computational engineering, with the rapid growth of data-driven methodologies (including proper orthogonal decomposition, deep learning, and machine learning). Recent achievements of data-driven computational approaches have demonstrated the potential to improve the performance of multiscale computational modeling and simulation.
Topics of interest include (but are not limited to):
Multiscale modeling and simulations of advanced materials and structures
Mechanical and environmental degradation of materials and structures (including aging and damage)
Reactive simulation on durability and stability of multifunctional structures
Multiphysics constitutive modeling and homogenization of composites
Reduced-order modeling in multiscale modeling and simulations
Advanced molecular dynamics simulations (including coarse graining modeling)
Data-driven multiscale modeling and simulation approach
Multiscale fracture mechanics modeling of advanced materials and structures
This mini-symposium focuses on the integration of computational mechanics with machine learning (ML) for the digital twinning of intelligent systems, including but not limited to manned and unmanned air/water vehicles, ranging from soft and biomimetic robots to autonomous underwater vehicles and flying drones, among others. Comparisons of new efficient, robust, and accurate methods, critical assessment and benchmarking against existing data-driven and ML techniques, and novel applications are welcome. The aim of this mini-symposium is to provide a platform for investigators to disseminate and discuss data-driven modeling and ML methods for the multiphysics prediction and optimization of aerospace, land-based, and marine vehicles. Novel physics-based, data-driven and ML technologies for active feedback control, real-time structural monitoring and multidisciplinary design optimization are desired. New ideas and contributions on software implementation details and benchmark problems are encouraged.
In the relentless pursuit of energy efficiency, sustainability, and the electrification of various sectors, energy storage systems, particularly batteries, have emerged as the linchpin of modern technological advancements. The pivotal role of batteries in powering electric vehicles, renewable energy integration, portable electronics, and grid stabilization underscores the imperative for comprehensive understanding and accurate modeling of their complicated multiphysics behavior. As we stand at the precipice of a transformative era in energy storage, the need for sophisticated, physics-based models and advanced computational techniques at various length scales to predict, optimize, and control battery performance has never been more pressing.
Our objective is to explore and promote the development of models that go beyond empirical approximations, delving into the intricate electrochemical processes within batteries. These models should encapsulate the physical and chemical phenomena governing energy storage and release, accounting for factors such as electrode kinetics, electrolyte behavior, thermal effects, and structural changes. Through this symposium, we hope to facilitate knowledge exchange, collaborative research, and the dissemination of novel computational tools that enable accurate battery performance predictions, state-of-health monitoring, remaining useful life prediction, and the design of sustainable, long-lasting energy storage solutions.
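A representative building block of such physics-based models (quoted only as an example of the electrode kinetics mentioned above) is the Butler–Volmer relation between the interfacial current density j and the overpotential η,

j = j_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right) - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right] ,

which, coupled with transport in the electrodes and electrolyte, thermal balances, and mechanical degradation, yields the multiscale, multiphysics systems this symposium addresses.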
This symposium seeks to provide a prominent platform for researchers, engineers, and industry experts to convene and deliberate on the latest developments, challenges, and breakthroughs in battery modeling and computational methodologies. With a keen emphasis on physics-based and data-driven approaches, this symposium aims to bridge the gap between fundamental electrochemical principles and practical battery applications. By fostering an interdisciplinary dialogue among experts from various domains, including materials science, chemistry, physics, electrical engineering, and computer science, we endeavor to accelerate the pace of innovation in battery technology. We cordially invite researchers, practitioners, and enthusiasts to participate actively in this stimulating intellectual exchange. Together, let us harness the power of physics-based battery modeling and computation to shape a more sustainable, efficient, and electrified world.
Energy storage devices and systems, from batteries and supercapacitors to fuel cells, have broad applications, from electric vehicles and grids to consumer electronics. These systems are inherently multiphysics and multiscale. Technological advancement demands energy storage solutions with higher capacity, longer life, higher reliability, and smarter management strategies. Designing such systems involves a trade-off among a large set of parameters. The topics of this symposium include, but are not limited to, the following:
Computational mechanics of materials and structures for batteries, supercapacitors, fuel cells and other systems.
Multiphysics simulations of electrochemical systems.
Prediction of performance such as capacity and degradation.
Failure mechanisms such as fracture and damage in energy storage materials and devices.
Design and optimization of energy storage materials, devices and systems.
Numerical simulation techniques, including machine learning approaches, for energy-related applications.
Computational particle-based solvers have emerged as powerful tools for conducting multiphysics and multiscale simulations, offering a comprehensive understanding of complex phenomena across scientific and engineering domains. These solvers consider particles as fundamental entities, each interacting with others and their environment, making them well-suited for addressing interconnected processes.
In the realm of multiphysics simulations, where different physical phenomena interact, particle-based solvers excel at capturing these intricate couplings. Whereas many grid-based methods can struggle with such interactions, particle-based approaches often provide a more natural representation. For instance, these methods can simulate fluid dynamics, heat transfer, and electromagnetic interactions simultaneously, revealing insights into how these processes influence each other within a unified framework.
Furthermore, particle-based solvers play a crucial role in multiscale simulations. These methods allow adaptive refinement, dedicating computational resources to areas of interest while coarsening less critical regions. This adaptability enables simulations spanning several orders of magnitude, such as studying the behaviour of nanoparticles in fluid flows or the interaction between molecules in chemical reactions.
One illustrative application of particle-based solvers is in additive manufacturing (AM). AM involves intricate multiphysics phenomena, including heat transfer, fluid flow, and material behaviour. Particle-based solvers can simulate the entire AM process, capturing the deposition of material layer by layer. For instance, in metal powder-bed fusion AM, these solvers can model the interaction between laser energy and metal particles, predicting melting, solidification, and the resulting microstructure.
In AM simulations, particle-based methods address multiscale challenges. They can model the macroscopic workpiece while resolving microscale phenomena, such as grain growth and defects within individual layers. This capability is essential for predicting material properties and final part performance accurately.
Despite their strengths, particle-based solvers come with challenges. Achieving numerical stability, handling boundary conditions, and managing computational costs remain areas of active research. The growing complexity of simulations demands substantial computational resources, often requiring high-performance GPUs.
Recent advancements in parallel computing, algorithm optimization, and hardware capabilities have propelled particle-based solvers forward. Researchers are refining existing methods and developing new ones to address limitations and expand applications, while coupling with other numerical techniques serves to further enhance their capabilities.
In conclusion, computational particle-based solvers are instrumental in multiphysics and multiscale simulations, providing insights into intricate phenomena. With applications like AM, these solvers showcase their ability to model complex processes and predict real-world outcomes accurately. As computational techniques continue to evolve, particle-based solvers are poised to drive scientific understanding and innovation across diverse fields.
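To make the basic ingredients concrete, the following deliberately minimal Python sketch shows the pairwise-interaction kernel and explicit time stepping common to many particle methods; the linear repulsion law, particle count, and time step are illustrative placeholders only, not the formulation of any specific solver (SPH, DEM, MPM, etc.).

import numpy as np

def pairwise_forces(positions, cutoff=0.1, stiffness=100.0):
    # Naive O(N^2) loop over particle pairs with a cutoff radius.
    # Production solvers replace this with physically derived kernels
    # (e.g., SPH smoothing kernels) and neighbor or cell lists.
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):
            rij = positions[i] - positions[j]
            dist = np.linalg.norm(rij)
            if 0.0 < dist < cutoff:
                # Illustrative linear repulsion that grows as particles overlap.
                f = stiffness * (cutoff - dist) * rij / dist
                forces[i] += f
                forces[j] -= f
    return forces

# Explicit (semi-implicit Euler) time integration of a small 2D particle cloud.
rng = np.random.default_rng(0)
pos = rng.random((50, 2))
vel = np.zeros_like(pos)
dt = 1.0e-3
for _ in range(100):
    vel += dt * pairwise_forces(pos)  # unit particle mass assumed
    pos += dt * vel

Everything beyond this skeleton, such as stable boundary treatment, adaptive resolution, and GPU-efficient neighbor search, is where the research challenges discussed above lie.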
Advances in material and component manufacture are accelerating the need for microstructure-aware simulation capabilities. The disparity of length scales between material microstructures and engineering components continues to be a challenge, especially for additively manufactured materials. Microstructure heterogeneity at the component scale is also a challenge. Relatedly, problems that exhibit a lack of length-scale separation are quite common and go beyond microstructures; one example is 3D-printed lattice structures, where representative cell dimensions may be on the order of the structural component itself. In finite element simulations at the component scale, the question of what properties to use, where to use them, and over what length scale persists. This minisymposium is focused on the development of novel methods to tackle these challenges. While there is a focus on additively manufactured metals, polymer systems are welcome since many strategies may overlap. Continuum-scale mechanical, electrical, and thermal properties are of interest.
We seek talks that describe methods of computational data analytics and/or workflows that address these challenges. Methods using FFT, RVE, and spatial statistics to characterize length scales and properties for use in finite element simulations are of interest, as are methods to translate inhomogeneous texture information from the laboratory scale to the component scale. We consider methods for identifying relevant length scales, homogenization, machine learning of constitutive models, and analytics applied to laboratory data (such as CT or EBSD) or simulated data, all with a focus on facilitating component-scale simulation that reflects important microstructural aspects.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.
0414 Multiscale Theory and Modeling of Advanced Nanocomposites, Xing-Quan Wang, Zhouzhou Pan and Denvid Lau
Nanocomposites have garnered considerable attention in both fundamental nanoscience research and practical applications due to their outstanding mechanical, electrical, thermal, and other functional properties. In nanocomposites, the incorporation of micro- and nano-scale constituents has led to enhanced material performance while simultaneously increasing the complexity and diversity of the material composition. Consequently, the investigation and analysis of nanocomposites necessitate the consideration of multiple length scales to establish relationships between macroscopic properties and the various hierarchical structures at the nanoscale and microscale levels. The multiscale approach facilitates the prediction of material performance and enables the optimization of composite material design. This minisymposium aims to highlight breakthroughs, progress, and challenges in the multiscale theory and modeling of nanocomposites.
Minisymposium topics include, but are not limited to, theoretical and simulation methods for the multiscale analysis of:
Design and optimization of the structure and performance of nanocomposites;
Assembly of advanced nanocomposites;
1D fiber/2D film/3D bulk nanocomposites based on nanoclay, carbon nanotube, graphene, MXene, boron nitride, etc.;
Multifunctional nanocomposites including mechanical, electrical, thermal, EMI shielding, etc.;
Smart and intelligent nanocomposites including self-healing, self-monitoring, adaptive, environmental response, etc.;
Nanocomposites used in electrical and energy devices, such as supercapacitors, batteries, etc.;
Nanocomposites for actuators, artificial muscles, sensors, etc.;
Lightweight structural nanocomposites used in vehicles, aerospace, etc.
preCICE is an open-source coupling library for partitioned multi-physics and multi-scale simulations. It enables the efficient, robust, and parallel coupling of separate single-physics solvers. This includes, but is not restricted to, fluid-structure interaction. preCICE treats these solvers as black boxes and, thus, only minimally invasive changes are necessary to prepare a solver for coupling. Ready-to-use adapters for well-known open-source solvers, including OpenFOAM, SU2, CalculiX, FEniCS, and deal.II, are available, while the core library is included in the xSDK ecosystem. The software offers methods for equation coupling, fully parallel communication, data mapping, and time interpolation schemes. The minisymposium brings together users and developers of the software. It enables exchange among users, who otherwise might not know much about each other's work. Furthermore, the developer team can get direct feedback from users, whom they sometimes know only from forum conversations. Lastly, the software and its capabilities can be presented in a full and broad sense, as not only the developers talk about their software, but users also report on their experiences. Recent work focuses on extending preCICE towards multi-scale coupling, higher-order mapping, and applications other than fluid-structure interaction. For more information, please visit https://precice.org/.
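To illustrate the black-box, minimally invasive coupling style described above, the following sketch shows the typical structure of a solver adapter written with the preCICE v2 Python bindings. The participant, mesh, and data names ("Fluid", "Fluid-Mesh", "Displacement", "Force"), the interface coordinates, and the solver internals are placeholders; the API has been revised in preCICE v3, so the official documentation should be consulted for exact call names.

import numpy as np
import precice

# One participant in a partitioned fluid-structure interaction coupling.
interface = precice.Interface("Fluid", "precice-config.xml", 0, 1)

mesh_id = interface.get_mesh_id("Fluid-Mesh")
coords = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])   # coupling-interface nodes (placeholder)
vertex_ids = interface.set_mesh_vertices(mesh_id, coords)

read_id = interface.get_data_id("Displacement", mesh_id)
write_id = interface.get_data_id("Force", mesh_id)

dt = interface.initialize()                 # dt returned by preCICE bounds the solver time step
while interface.is_coupling_ongoing():
    displacements = interface.read_block_vector_data(read_id, vertex_ids)
    # ... advance the in-house fluid solver by dt using the received displacements ...
    forces = np.zeros_like(coords)          # placeholder for the computed interface loads
    interface.write_block_vector_data(write_id, vertex_ids, forces)
    dt = interface.advance(dt)
    # Implicit coupling additionally requires reading/writing iteration checkpoints.
interface.finalize()

The single-physics solver keeps its own discretization and time loop; only the data exchange at the coupling interface is handed over to the library.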
In this minisymposium, we invite presentations devoted to space-time approaches. Space-time modeling has been established since the late 1960s and has developed considerably since then. The holistic treatment of the spatial and temporal coordinates, spanning one physical space in which processes evolve, provides several benefits from both the modeling and the numerical perspective. Space-time methods have been utilized in various applications, such as incompressible flow, solids and fluid-structure interaction, recently even thermo-mechanical dissipative microstructure evolution, and other multi-physics problems. In this minisymposium, current state-of-the-art advancements in space-time methods will be presented. These include space-time modeling of fluids and solids, space-time discretizations, space-time model order reduction techniques, space-time variational approaches, solvers, and error control. Both engineers and applied mathematicians are equally invited to contribute to this minisymposium, to create a platform for a fruitful exchange of ideas and discussion of novel trends.
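As a minimal model-problem illustration of such a holistic space-time treatment (not the formulation of any particular contribution), the heat equation on a space-time cylinder \(Q = \Omega \times (0,T)\) can be posed variationally as: find \(u\) with \(u(\cdot,0) = u_0\) such that
\[
\int_0^T \Big( \langle \partial_t u, v \rangle + (\nabla u, \nabla v)_{\Omega} \Big)\,\mathrm{d}t
= \int_0^T (f, v)_{\Omega}\,\mathrm{d}t
\qquad \text{for all } v \in L^2\big(0,T;H^1_0(\Omega)\big),
\]
so that a single variational problem is solved over the whole of \(Q\). It is this viewpoint that opens the door to space-time discretizations, space-time model order reduction, and space-time error control.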
Chemically complex materials, namely materials with multiple principal chemical components, are drawing increasing attention in both academia and industry. Examples of emerging chemically complex materials include mixed-ion perovskites, high-entropy alloys, and van der Waals heterostructures; these are promising advanced materials with applications in advanced electronics, quantum computing, renewable energy, structural materials, and even superconductors. The advantage these materials possess is the wide tunability of material properties, or the potential of attaining multiple functionalities at the same time (e.g., conversion efficiency and lifetime in perovskites). The microstructure of these chemically complex materials plays a pivotal role in material properties and performance; nevertheless, owing to the complex combinatorial chemical/permutation space spanned by chemically complex materials, comprehensive insight into their process-structure-property relationships remains a challenging task. In this mini-symposium, we welcome researchers from around the world to share and exchange their latest work utilizing computational techniques across spatial and temporal scales to reveal the microstructures of chemically complex materials and their impact on material properties for structural-material, energy-storage, catalysis, and renewable-energy applications.
The rapid growth of renewable energy sources and the increasing demand for high-performance energy storage systems have ushered in a new era of interdisciplinary research and development. To address the complex challenges associated with energy transition and storage, a holistic approach is required. This mini-symposium aims to bring together experts from various fields to explore innovative strategies and computational techniques for understanding and optimizing these intricate systems.
In recent years, layered energy transition and energy storage systems, such as solid-state batteries, structural batteries, water electrolyzers, or fuel cells, have gained significant attention. These systems involve complex interactions between multiple physical and chemical processes, including electro-chemical reactions, heat transfer, mechanical deformation, mass transport, damage and fatigue. Accurate modeling and simulation of these coupled phenomena are crucial for improving the efficiency, safety, and durability of energy conversion and storage devices.
This mini-symposium will provide a platform for researchers to discuss novel methodologies and present their latest findings in modeling and simulation of such electro-chemo-thermo-mechanical interactions. Topics of interest include, but are not limited to,
material modeling,
advanced numerical techniques,
multi-scale modeling, and
optimization (design and/or control)
for the relevant multi-physics problems that occur during assembly, service, aging and failure of systems for energy transition and storage.
References:
[1] Carlstedt, D., Runesson, K., Larsson, F., Jänicke, R., & Asp, L. E. (2023). Variationally consistent modeling of a sensor-actuator based on shape-morphing from electro-chemical–mechanical interactions. Journal of the Mechanics and Physics of Solids, 179, 105371.
[2] Kink, J., Ise, M., Bensmann, B., & Hanke-Rauschenbach, R. (2023). Modeling Mechanical Behavior of Membranes in Proton Exchange Membrane Water Electrolyzers. Journal of The Electrochemical Society, 170(5), 054507.
[3] Werner, M., Pandolfi, A., & Weinberg, K. (2021). A multi-field model for charging and discharging of lithium-ion battery electrodes. Continuum Mechanics and Thermodynamics, 33, 661-685.
Multi-scale and multi-material topology optimization (TO) is a driving force in the design of lightweight, extreme structures and materials. The multi-scale concept leverages scale effects, expanding the components' performance beyond that of single-scale designs. For instance, recent work on de-homogenization TO achieved stiffness and stability performance matching the theoretical bounds [1]. Moreover, the use of TO for designing multi-material components and novel composites, such as variable-stiffness laminates, allows tailoring their response and use in multiphysics applications [2].
The foregoing TO methods have recently been complemented by approaches that employ high-level representations of geometrical primitives, providing several advantages in the design of components, architected materials, and structural assemblies. Feature mapping [3] and discrete object projection [4] techniques provide better geometrical control, allowing a straightforward implementation of certain manufacturing constraints and facilitating a CAD interpretation of the optimized design. These methods are nowadays used for applications such as multi-component design, two-scale and multi-material truss lattice design, structures made of fiber-reinforced components, and the optimal layout of inclusions in functional materials, fulfilling non-trivial geometric restrictions.
This minisymposium aims to bring together experts in the field to discuss recent advances in the broad range of topology optimization methods used in multi-scale, multi-material, and multi-component TO.
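For orientation, many of the multi-scale and multi-material approaches discussed here can be read as generalizations of the classical single-material, density-based compliance-minimization problem (stated here only as a reference point):
\[
\min_{\boldsymbol{\rho}}\; c(\boldsymbol{\rho}) = \mathbf{F}^{\mathsf T}\mathbf{U}(\boldsymbol{\rho})
\quad \text{s.t.} \quad
\mathbf{K}(\boldsymbol{\rho})\,\mathbf{U} = \mathbf{F}, \qquad
\sum_e v_e\,\rho_e \le V^{\ast}, \qquad
0 < \rho_{\min} \le \rho_e \le 1,
\]
where the element stiffness is interpolated, e.g., via SIMP as \(E_e(\rho_e) = \rho_e^{p} E_0\) with penalization exponent \(p > 1\). Multi-scale methods replace the pointwise interpolation by homogenized unit-cell properties, while multi-material and feature-mapping formulations enrich the design parametrization \(\boldsymbol{\rho}\) itself.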
Topics of the MS include, but are not limited to:
TO of multi-scale and/or multi-material structures, including those with nonlinear, dynamic, and transient responses.
TO of architected materials, including multi-scale lattices, woven materials, and others.
Homogenization and de-homogenization based TO.
Design of structures made of composite materials (e.g., fiber-reinforced polymers, reinforced concrete, etc.)
Integration of manufacturing requirements and process modelling within TO
Geometry representation/parametrization and linking of TO with CAD/CAE software
Industrial applications and case studies
REFERENCES
[1] Wu J., Sigmund O., Groen J.P. – Topology optimization of multiscale structures: a review. Structural and Multidisciplinary Optimization, (2021) 63: 1455-1480
[2] Bayat M., Zinovieva O., Ferrari F., et al. – Holistic computational design with additive manufacturing through topology optimization combined with multi-physics multi-scale material and processing modelling. Progress in Material Science, (2023) 138: 101129
[3] Wein F., Dunning P.D., Norato J.A. – A review of feature mapping methods for structural optimization. Structural and Multidisciplinary Optimization, (2020) 62: 1597-1638
[4] Guest, J.K. – Optimizing the layout of discrete objects in structures and materials: A projection-based topology optimization approach. Computer Methods in Applied Mechanics and Engineering, (2015) 283: 330-351
Coupled systems, as they appear for example in fluid-structure interaction or in the control of complex systems, pose particular challenges already when solving the forward problem, as they often require the coupling of discretisations, solution algorithms, and software. These challenges grow even larger if inverse problems like identification are tackled, if one wants to perform uncertainty quantification or optimisation for such a system, or if a control algorithm is to be designed to achieve some desired optimal outcome. To reduce the computational burden, reduced order models (ROMs) or proxy models are used in such cases, sometimes combined with machine learning, lately often in the form of deep neural networks. Due to their intended use in optimisation, uncertainty quantification, control, or deterministic or stochastic identification, such models are necessarily parametric models, often involving a large number, and a high dimension, of parameters.
For coupled systems, the use of such ROMs is even more desirable, but they are often produced for each system component separately, and the problem of coupling then transfers to these ROMs. The questions which arise, and which should be addressed in this mini-symposium, are how to produce such parametric reduced order models and how to couple or combine models from different sub-domains to perform one of the aforementioned tasks for the whole system. Of particular interest are new, fast, computationally effective, and accurate algorithms, as well as contributions to their formulation, analysis, computational procedures, understanding, and error estimation.
The mini-symposium aims to bring together researchers in these fields, offer a look at the problems alluded to above, contribute to computational, physical, engineering, and mathematical insight, and offer vistas and perspectives on the formulation, analysis, and computational solution of such problems.
0500 Biomechanics and Mechanobiology
This minisymposium aims to promote international exchange of new knowledge and development in all aspects of biomechanics of bio- and bio-inspired soft materials, from theoretical formulation and development, to numerical computation and simulation, and further to relevant experimental work and engineering application, especially featuring the most advanced biomechanics developments for natural biomaterials and bio-inspired soft materials. It includes all classes of material properties (such as strength, elasticity, plasticity, toughness, impact strength, fatigue, fracture, and creep) and their structures, bioactivity, biocompatibility, biostability, self-assembly, and structural hierarchy. Of specific interest are the underlying physics and chemistry governing the functional elements of the soft materials; these functions include (but are not limited to) structural, thermal, chemical, magnetic, or interdisciplinary combinations thereof. Although the analytical and numerical analyses of these functional elements of soft materials constitute the main highlights of this mini-symposium, important computational studies of other aspects will also be welcome, such as imperfections in soft materials and their resulting limitations, novel processes for advanced soft materials synthesis, engineering of soft material properties, environmental considerations in soft material performance, etc.
Computational mechanics and numerical methods play an increasingly significant role in the study of biological systems at the organism, organ system, organ, tissue, cell, and molecular scales. New and exciting applications of computational mechanics go beyond the classical theories and incorporate biomechanical mechanisms inherent in biology such as adaptation, growth, remodelling, active (muscle) response, and inter- and intra-patient variables. Synergies among fundamental mechanical experiments, multi-modal imaging and image analyses, new mathematical models and computational methods enable studies of, e.g., microphysical (mechanobiological) cellular stimuli and response, structure-function relationships in tissues, organ and tissue integrity, disease initiation and progression, engineered tissue replacements, and surgical interventions.
The goal of this minisymposium is to promote cross-fertilization of ideas and collaborative experimental and numerical efforts towards more rapid progress in advancing the overall field of computational biomechanics. To this end, contributions considering the following topics are particularly welcome: coupled analyses of chemo-mechanical processes; methods coupling multiple scales and/or multiple physics; growth and remodelling of biological tissues; characterization and impact of inter- and intra-patient variability; applications with clinical impact or potential clinical impact; new constitutive models; mechanobiology and cellular mechanics; applications of medical images and image analyses in mechanics; mechanics of pathological processes; and experimental methods and computational inverse analyses towards model calibration.
Verified and validated computational models of hard and soft tissues, including bones, teeth, tendons, or arteries, have been shown to be a valuable clinical tool in many applications, including, but not restricted to, the prediction of fracture risk in femurs and vertebrae due to osteoporosis, the design of prophylactic surgeries in femurs with metastatic or benign tumors, the estimation of possible ruptures of aortic aneurysms, or the optimization of patient-specific implants, including those based on 3D printing. This mini-symposium will focus on recent advances in the realistic mathematical and computational modeling of the aforementioned hierarchical biological systems, together with their translation into clinical practice, evidencing difficulties and opportunities concerning the in-vivo validation of computational biomechanics tools. This setting is intended to bring together computational biomechanics scientists from different backgrounds, so as to arrive at new perspectives on open, challenging problems.
Computational modeling and simulation-based approaches in cardiovascular biomechanics and biomedicine have seen rapid progress in recent years. Computational approaches provide a non-invasive modality for understanding the underlying mechanics of cardiovascular diseases, as well as guiding device design and treatment planning. The future of computational cardiovascular biomechanics lies in patient-specific simulation of real disease events, enabling simulation-assisted diagnostics, device design and deployment, and treatment planning decisions. The primary challenge here is that patient-specific phenomena involve the synergistic interplay of multiple underlying physical, mechanical, and chemical processes, coupled to each other across several spatial and temporal scales. Concurrently, the availability of high-resolution imaging and clinical data, and recent innovations in data-driven models and artificial intelligence, have enabled new avenues for advancing patient-specific predictive models of cardiovascular phenomena. Together, multiphysics and data-driven modeling have thus slowly emerged as the new frontier in high-fidelity modeling of cardiovascular systems, aiming to resolve physiological and pathological phenomena in real patient-specific scenarios. Advancements in this field call for inter-disciplinary research efforts that go beyond current multiscale computational mechanics approaches in cardiovascular biomechanics. This minisymposium will bring together scientists working across various domains to provide a platform for discussing the state-of-the-art and future directions in multiphysics, multiscale, and data-driven modeling of cardiovascular systems. Fundamental and applied contributions from a wide range of topics focusing on theoretical and computational approaches for cardiovascular phenomena will be discussed. The term multiphysics in this context refers to coupled physical interactions including not only fundamental fluid and solid mechanics, but also transport phenomena, biological growth and remodeling, electrophysiology, and biochemical interactions including drug delivery and other related aspects. Data-driven approaches include artificial intelligence, machine learning, data-augmented models, image analytics, uncertainty quantification, and related techniques. Topics include (but are not restricted to):
Coupled multiphysics models for cardiac mechanics.
Multiphysics and multiscale models for vascular biology and biomechanics – arterial and venous systems.
Patient-specific multiphysics modeling of cardiovascular diseases like stroke, aneurysm, thrombosis, atherosclerosis, embolisms.
Numerical methods and algorithms for multiphysics coupling – staggered and monolithic approaches; mesh-based, mesh-free, and particle-based methods.
Artificial intelligence and machine learning in models for cardiovascular phenomena.
Assimilation of experimental data into multiphysics models.
Integration of cardiovascular imaging into multiphysics models.
Applications in cardiovascular medical and surgical treatments for patients.
Applications in design, deployment, and operation of medical devices in vivo.
Thrombotic/embolic risk assessment for biomedical devices and mechanically assisted circulation.
Computational tools, specialized software, and databases for cardiovascular simulations.
Medicine relies on a broad range of imaging modalities to visualize, measure, and understand disease.
Concurrently, computational models of tissues are developed with higher fidelity due to improvements in medical imaging acquisition time, deep tissue imaging, and resolution across scales. New imaging modalities and novel applications of existing technologies have also enabled the characterization of tissue properties beyond geometry or microstructure. Imaging methods used in computational medicine include magnetic resonance imaging, computed tomography, ultrasound, optical coherence tomography, digital image correlation, multi-photon microscopy, and various other microscopy modalities. Imaging data thus provides the geometry indispensable for the generation of any realistic computational mechanics model. It also enables measurements of the changes in the geometry - elastic strains or permanent deformations - that occur during tissue development, regeneration, aging, and disease. In combination with other measurements such as forces, and an increasing understanding of the physics at different length scales, imaging data plays a fundamental role in the development of validated and predictive constitutive models for biological tissues at the cell, tissue, and organ levels.
In this minisymposium, we solicit contributions that describe advances in computational mechanics and data-driven modeling in medicine. Novel methods for models informed by or based on innovative use of imaging modalities across the scales are welcomed. This minisymposium would also like to highlight interdisciplinary efforts of basic and clinical scientists, biophysicists, engineers, and mathematicians that jointly address the most important challenges and trends in imaging-based modeling of biological phenomena, e.g.: neuron material transport, growth and remodeling mechanisms in myopia, keratoconus and glaucoma, skin growth, brain tissues, and cardiovascular systems.
Cancers are highly heterogeneous diseases that involve diverse biological mechanisms, interacting and evolving at various spatial and temporal scales. Multiple experimental, histopathological, clinical, and imaging methods provide a means to characterize the heterogeneous and multiscale nature of these diseases by providing a wealth of temporally and spatially resolved data on their development and response to therapies (e.g., cancer architecture, mechanics, and vascularity; cancer cell mobility and proliferation; drug transport and effects). These multimodal, multiscale datasets can be exploited to constrain biophysical models of cancer growth and treatment response both in preclinical and clinical settings. These models can then be leveraged to test hypotheses, produce individualized cancer forecasts to guide clinical decision-making, and, ultimately, to design optimized therapies. The overall goal of this minisymposium is to provide a forum to present and discuss recent developments in data-informed computational models and methods for predicting cancer growth and treatment response, with special focus on the following research areas: (i) biology-based mechanistic models of cancer growth and treatment in vitro and in vivo; (ii) computational methods for model initialization, parameterization, and patient-specific simulation; (iii) model-oriented, personalized optimization of treatment regimens; (iv) uncertainty quantification and model selection methods; (v) hybrid strategies combining machine learning and mechanistic modelling; and (vi) digital twins in clinical oncology.
Computational mechanics and numerical methods are powerful tools to assist early diagnosis of diseases and advance modern treatment strategies. However, the complexity of living active systems makes entirely new demands on mechanical models and numerical solution methods, cf. e.g. [1–3]. To allow for predictive simulations which are useful to the clinical community and can be implemented in daily clinical practice, it is essential to combine mechanics with biochemistry or electrophysiology through multi-physics modeling. These models can provide a gateway to bridge the scales from metabolic processes on the subcellular level to macroscopic mechanics, to incorporate the tissue response to mechanical stimuli through coupling strategies, and to intelligently integrate experimental data for model calibration and validation. Another important challenge is to not only consider individual processes independently, but to incorporate the interplay of different functional units in the context of a whole biological system.
This minisymposium focuses on novel approaches to master those challenges. We welcome highly interdisciplinary contributions bringing together the expertise of different fields such as mechanical modeling, numerics, data science, and clinical application. The goal of this minisymposium is to create valuable synergies between researchers working on different biological systems, potentially on different scales, to bring computational modeling one step closer to clinical practice.
REFERENCES
[1] Ricken, T., Werner, D., Holzhütter, H.G., König, M., Dahmen, U., and Dirsch, O., Biomech Model Mechanobiol, Vol. 14, 515–536, 2015.
[2] Röhrle, O., Sprenger, M., and Schmitt, S., Biomech Model Mechanobiol, Vol. 16, 743–762, 2017.
[3] Budday, S., Ovaert, T.C., Holzapfel, G.A., Steinmann, P., and Kuhl, E., Archives of Computational Methods in Engineering, Vol. 27, 1187–1230, 2020.
Numerical modeling is important for the understanding of many problems in biomechanics, including, for example, the study of hemodynamics and the modeling of tissues in the human body. Though numerical studies are noninvasive, they are often time-consuming, especially when multiple scenarios must be studied and compared, because of the complex geometries and high computational complexity involved. In this mini-symposium, we present some of the latest developments in numerical methods, such as parallel domain decomposition methods, and their high-performance implementations for solving Newtonian and non-Newtonian fluid flow problems, linear and nonlinear elasticity problems, and electrophysiology problems. Several classes of biomechanical problems will be targeted, including the cerebral artery, the coronary artery, the pulmonary artery, the abdominal aorta, and the heart.
Cardiac fibrosis is a form of structural remodeling connected to various cardiac pathologies and diseases. It impacts cardiac function directly and indirectly through various pathways. On the electrophysiological side, it leads to reduced conductivity and deranged cellular excitability, increasing the risk of atrial and ventricular arrhythmia; mechanically, it reduces contractility and increases matrix stiffness, impairing pump function; hemodynamically, it leads to reduced blood flow and increased thrombogenesis risk. Cardiac fibrosis is manifested across many spatial and temporal scales, and computational models can be immensely useful in exploring its complex, multiphysics pathophysiological consequences.
This minisymposium aims to bring together computational researchers working on projects focusing on the impact of cardiac fibrosis. It allows for the exchange of recent advances achieved through modeling and simulation across multiple scales (e.g., myofilament dynamics, electromechanical feedback mechanisms, cell-matrix interaction, alterations in tissue-level mechanical properties, organ-level multiphysics modeling). Use of previously published or new experimental or clinical data in these models is expected to yield additional insights and guide computational research toward the most interesting new questions, spanning multiple physiological scales. For example, the impact of cell structure or calcium dynamics can be studied at the cell and tissue levels, while areas of interest at the organ level include personalization of models or incorporation of data from novel clinical imaging modalities. Multiscale modeling focus areas include challenges related to numerical stability and model validation, verification, and uncertainty quantification. This is a highly multidisciplinary field, requiring expertise on many levels. A critical aim of this minisymposium is to promote the unification of these disparate and often siloed research teams.
Advances in tissue engineering and in biomedical imaging in the past decade have hugely increased the need for data-driven computational models of musculoskeletal tissues such as bone, teeth, cartilage, tendons, and ligaments, as well as engineered, biofabricated tissues. Computational modelling paired with imaging data can provide new insights into quantities of interest not easily measurable in the lab, such as transport properties, mechanics, and dynamic information. Musculoskeletal tissues are regulated by mechanistic and mechanobiological processes that depend on morphology and hierarchical structure, as well as material properties. Shape adaptation, remodelling, damage repair, mineralization, and signal propagation are critical dynamic biological processes that enable these musculoskeletal tissues to resist failure during our lifetime, despite being under-engineered compared to static engineering structures sustaining repeated loadings. The growth, adaptation, and repair of these tissues depend crucially on mechanosensation and mechanotransduction, the ability to sense mechanical states and to generate biological responses. Several computational models, experiments, and tissue engineering scaffolds have been developed in recent years to shed light on these mechano-regulated processes.
The objective of this minisymposium is to bring together the expertise of established and emerging researchers investigating the complex mechanobiological interplays at stake in musculoskeletal tissues, including mineralised tissues (bone, teeth) and soft tissues (cartilage, tendon and ligament), particularly in the context of clinical and biomedical applications such as orthopaedic prostheses, prosthetic dentistry, implantology, bioscaffold design, and tendon repair. The minisymposium aims to share and transcend the different computational approaches (e.g., phenomenological models, cell-based models, multiscale approaches), the results that can be obtained with them, and the biological insights that they can provide, both in terms of data analysis and interpretation, and in terms of predicting time evolutions in health and disease. A specific focus will be on the integration of novel experimental data in computer models, collected e.g. using high-resolution micro-computed tomography, electron microscopy, polarisation-dependent second harmonic generation, DualBeam (FIB/SEM) systems, bone chamber models, bioreactors, and scaffolds.
This mini-symposium focuses on computational models for the mechanobiology and biomechanics of cells, vesicles and biomembranes, and their structural constituents such as lipid bilayers, protein filaments, cytoplasm, cytoskeleton, organelles and nuclei. Due to the diversity of these components and the different length and time scales involved, a wide range of computational approaches can be considered. Examples are continuum models (like finite element and meshfree methods), structural models (like shell or beam models), and molecular models (like coarse-grained or all-atom molecular dynamics). Machine learning techniques can also be expected to play an increasingly important role in the study of cells, vesicles and biomembranes. This session aims at bringing together researchers working on these topics and providing them with a forum for discussion.
Possible topics to be discussed in this symposium are:
deformation of cells (as well as vesicles, biomembranes and their constituents)
interaction, contact and adhesion of cells
tethering and budding of cells
interaction of cells with their extracellular matrix
cell motility
diffusion into cells and across membranes
protein binding to membranes
virus penetration of cells
cell division and rupture
mechanosensing and mechanotransduction of cells
multiscale modeling of cells
cellular signalling pathways
medical imaging of cells
0600 Materials by Design
This minisymposium aims to bring together researchers to share new understanding of the design and mechanics of composites and structures with multiple functions. The next generation of materials and structures will require unprecedented mechanical properties and multifunctionality. Composites and structures that possess multiple functions, or the ability to sense and respond to various environmental conditions or stimuli, will have a wide range of application prospects in aerospace, naval, and vehicle engineering, flexible electronics, robotics, and other fields. Issues related to design optimization, property tuning, multifunctional mechanisms, mechanical methods, and applications of composite materials and structures are welcome for discussion in this minisymposium. Structural functions include mechanical properties like strength, stiffness, fracture toughness, and damping, while non-structural functions include electrical and/or thermal conductivity, sensing and actuation, energy harvesting/storage, self-healing capability, electromagnetic interference (EMI) shielding, recyclability, and biodegradability.
This mini-symposium aims to provide a platform for knowledge exchange and stimulate discussions on the computational design of mechanical metamaterials with tailored properties. As metamaterials continue to gain popularity for applications in several fields, including aerospace engineering, sustainable manufacturing, healthcare and biomedical engineering, conventional trial-and-error design methods have proven to be inadequate and inefficient for handling their vast design-property space. The increasing complexity required by modern applications demands advanced computational modeling and systematic inverse design techniques that go beyond trial-and-error approaches based on physical intuition.
We welcome contributions that explore current methodological research enabling the computational design of mechanical metamaterials. Topics of interest include both novel approaches to design and novel applications of metamaterials including, but not limited to:
Advanced computational modeling (e.g. nonlinear constitutive behavior, fracture, instabilities, wave propagation)
Micromechanics and multiscale modeling
Topology optimization
Data- and machine learning-driven modeling and generative design
Novel design techniques for applications of metamaterials in healthcare and space
This symposium aims to expand international cooperation and promote research efforts in all aspects of the discipline of Computational Mechanics of Soft Matter and Machines. It will feature the frontiers of mathematical modeling, simulation, measurement, and applications of soft matter and machines, including hydrogels, ionic gels, polymers, dielectric elastomers, shape memory polymers, aerogels, and soft robots (soft machines). Of special interest are the mechanisms governing the structural, mechanical, chemical, electrical, optical, and thermal properties, or any combination of these, especially analytical and computational studies of their intrinsic properties and potential applications. The symposium will also cover novel responses to environmental stimuli in material performance, as well as novel processes for the synthesis of these advanced materials.
All accepted abstracts will be automatically invited to be extended to full papers for publication consideration in one of the following journals:
International Journal for Computational Materials Science and Engineering (Special Issue);
International Journal of Applied Mechanics (normal issues, special topics)
For any further request, please contact Prof. Zishun Liu:
zishunliu@mail.xjtu.edu.cn
Advanced engineered materials like mechanical and acoustic/elastic metamaterials exhibit remarkable engineering attributes that are absent in natural materials. Elastodynamic metamaterials, in particular, showcase distinctive dynamic and mechanical characteristics owing to their intricate internal structure, composed of an arrangement of unit cells. Over the past few decades, there has been a notable upswing in research endeavors within these realms, leading to captivating and stimulating discoveries.
The forthcoming phase of research is centered around the development of responsive and adaptive artificial structures, imbued with the ability to be programmed, reconfigured, and tuned in terms of their metamaterial properties. This entails the pursuit of properties that can be finely tuned and altered, reminiscent of biological structures. Leveraging innovative manufacturing techniques alongside advanced multiscale structural optimization methods, researchers are now able to design and produce metamaterials with unit cells that are adaptable and optimized for specific regions. Origami and Kirigami metamaterials are some examples.
Such metamaterials possess the capability to seamlessly adopt or modify their wave dispersion and mechanical properties when exposed to external stimuli, following a predetermined blueprint. This brings to mind the adaptability observed in biological entities. In most applications, effective tunability and switching of the quasi-static and dynamic properties are highly desirable.
This minisymposium focuses on discussing the mechanics of advanced metamaterials governing robust physical phenomena that go beyond the notion of lightweight structures. The goal of this minisymposium is to promote discussions between researchers working on the methods themselves and researchers or practitioners applying those methods in new applications. This minisymposium welcomes contributions on, but not limited to, the following focus areas in mechanical and elastic metamaterials:
mechanics of acoustic/elastic metamaterials and phononic crystals;
architected structures/mechanical metamaterials including origami and kirigami based metamaterials;
mechanics and physics of acoustic/elastic metasurfaces;
new applications of acoustic/elastic metamaterials and mechanical metamaterials;
finite element analysis in mechanical and acoustic/elastic metamaterials;
modelling, numerical analyses and experiments on topologically protected interface modes;
wavefront control on bi-stable and multi-stable metamaterials, particularly for origami and kirigami metamaterials;
modelling and computational simulations of seismic metamaterials; and
modelling, optimization and manufacturing of metastructures for low-frequency wave manipulation
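As a common point of reference for several of the topics above (stated generically, not as the formulation of any particular contribution), the dispersion properties of such periodic media are typically obtained from a Bloch eigenvalue problem posed on a single unit cell,
\[
\left[\mathbf{K}(\mathbf{k}) - \omega^{2}\,\mathbf{M}\right]\hat{\mathbf{u}} = \mathbf{0},
\qquad
\mathbf{u}(\mathbf{x} + \mathbf{a}) = \mathbf{u}(\mathbf{x})\,e^{\,\mathrm{i}\,\mathbf{k}\cdot\mathbf{a}},
\]
where \(\mathbf{K}(\mathbf{k})\) and \(\mathbf{M}\) are the unit-cell stiffness and mass matrices after imposing Bloch periodicity for lattice vector \(\mathbf{a}\) and wave vector \(\mathbf{k}\). Sweeping \(\mathbf{k}\) along the boundary of the irreducible Brillouin zone yields the band structure, from which band gaps, metasurface behavior, and topological interface modes are identified and subsequently tuned or reconfigured.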
Architected materials and structures offer a unique avenue for engineers to achieve customized properties and functionalities from conventional base materials. These innovative materials and structures find applications across a diverse range of fields, including optical, acoustic, and structural domains. The inherent versatility offered by these materials has naturally awakened the interest of both academia and industry as they strive to capitalize on their potential.
This mini-symposium seeks to bring together researchers working on different aspects of architected materials and structures, including topics ranging from theoretical and computational methods of analysis, methods for design and synthesis, to application and methods of fabrication.
Topics of interest include, but are not limited to:
Architected materials and structures for structural, acoustic, thermal, mechanical, biomechanical, electromagnetic, and other applications.
Methods for design of architected materials and structures, including data-driven and optimization techniques.
Nonlinear behavior of architected materials and structures.
Hierarchical architected materials and structures.
Bio-inspired architected materials and structures.
Knitted or woven architected materials and structures.
Adaptive, active, reconfigurable architected materials and structures.
Novel fabrication methods for architected materials and structures.
This minisymposium is intended as a forum for presentation and discussion of results and problems related to the mathematical modeling, numerical simulation and experimental testing of advanced materials and smart structures. Of great interest are innovative engineered cementitious composites with constituent phases, e.g. shape memory materials (alloys, polymers), which provide the composites with special functionalities, e.g. the ability to self-heal or self-center, or a high damping capacity.
Different scales of observation (electron and optical microscopy, digital image correlation) and description via multiscale methods of averaging and computational homogenization can be considered, including microscopic, mesoscopic and macroscopic scales. The main goal is to find the constitutive relations, taking into account the influence of the interfaces between the constituent phases, to assess the effective properties of advanced materials and finally to build the computational model (digital twin) of the material and structures made thereof in the era of digital technology.
Presentations of both deterministic and stochastic models and solution techniques for coupled chemo-hygro-thermo-mechanical processes, as well as deterioration processes, in advanced materials and smart structures are welcome within the framework of this minisymposium.
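As a brief reminder of the computational homogenization setting referred to above (standard first-order homogenization, quoted only for orientation), the effective response is obtained from volume averages over a representative volume element \(\Omega_{\mathrm{RVE}}\):
\[
\bar{\boldsymbol{\sigma}} = \frac{1}{|\Omega_{\mathrm{RVE}}|}\int_{\Omega_{\mathrm{RVE}}} \boldsymbol{\sigma}(\mathbf{x})\,\mathrm{d}V,
\qquad
\bar{\boldsymbol{\varepsilon}} = \frac{1}{|\Omega_{\mathrm{RVE}}|}\int_{\Omega_{\mathrm{RVE}}} \boldsymbol{\varepsilon}(\mathbf{x})\,\mathrm{d}V,
\qquad
\bar{\boldsymbol{\sigma}} = \mathbb{C}^{\mathrm{eff}} : \bar{\boldsymbol{\varepsilon}},
\]
subject to the Hill-Mandel condition \(\bar{\boldsymbol{\sigma}} : \dot{\bar{\boldsymbol{\varepsilon}}} = \overline{\boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}}\), which ensures energetic consistency between the scales; the influence of interfaces between the constituent phases enters through the boundary-value problem solved on the RVE.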
Architected materials, whose mechanical properties are dictated not only by their composition but also by their spatial architecture, have been increasingly adopted in engineering applications. By carefully designing their microstructures, researchers have created materials with unique and tunable properties, such as programmable linear and nonlinear elastic responses, wave guidance and attenuation, reconfigurability, and enhanced fracture toughness. This minisymposium aims at bringing together researchers working at the forefront of modeling and inverse design of such materials, as well as their connection to specific applications. Topics of interest include, but are not limited to:
Computational methods for modeling of periodic or non-periodic microlattices, woven materials, reconfigurable materials, bioinspired and cellular materials
Homogenization and multiscale modeling of architected materials
Inversely designed static and dynamic responses of architected materials, caused by external effects including waves and impact
Model-based and data-based inverse design of materials for structural, mechanical and soft robotics applications
Multi-physics modeling of architected materials with, for example, electro- or magneto-mechanical coupling
Computational methods for damage and failure of architected materials
0700 Fluid Dynamics and Transport Phenomena
Many problems in geophysical and environmental fluid mechanics exhibit a wide range of scales and must be solved over large, geometrically complex spatial domains, often for long periods of time. Computational methods for these types of problems have matured considerably in recent years. This minisymposium will examine the latest developments in solving geophysical and environmental fluid mechanics problems. Topics of interest include:
Model development and application.
Coupling of flow and transport processes and models.
High-performance computing and parallelization strategies.
Error analysis, verification and validation.
Unstructured mesh generation algorithms and criteria.
Fluid-structure interactions.
Novel discretization methods.
Stabilization techniques.
Efficient solver development and application.
The mini-symposium is dedicated to the discussion of recent developments and applications in the field of numerical simulation of multiphase flow in porous media, encompassing petroleum reservoirs, aquifers, nuclear disposal, carbon storage, hydrogen storage, geothermal energy, transport of contaminants, poroelasticity and related disciplines, including new gridding, mesh adaptation, advanced numerical formulations, artificial intelligence methods, multiscale and multilevel methods. The goal is to bring together researchers, students, and professionals in the field of petroleum reservoir simulation and all areas involving porous media flows. The scope of the mini-symposium ranges from mathematical and computational methods to the modeling and simulation of challenging applications in multiphase flow in porous media.
This mini-symposium aims to provide a forum for sharing and discussing research works in the general area of multiphase flow and non-Newtonian fluids, especially inter-disciplinary studies that cross the traditional boundary between solids and fluids.
Many natural and industrial processes involve dynamic motions of both fluids and solids, forming a complex multiphase flow which is often further complicated by non-Newtonian / viscoelastic / viscoplastic behaviours, phase transitions, chemical reactions, and the presence of porous media.
Examples include welding and casting, 3D printing, polymer injection moulding, fresh concrete placement, oil and grease lubrication, debris flow, sedimentation, dust storm, and many more.
This session welcomes, but is not limited to, the following topics:
Physical and mathematical models of multiphase systems and processes
Numerical modelling of multiphase flow and non-Newtonian fluids
Computer simulation of complex systems involving multiple fluids and solids
Numerical and experimental studies of materials and processes involving phase transition and/or chemical reaction
Applied studies that cross the traditional boundary of solids and fluids: 3D printing, injection moulding, concrete placement, debris flow, sandstorm, welding and casting, food processing, glass forming, and others.
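As one concrete example of the non-Newtonian behaviour referred to above (quoted as a standard constitutive form, not as the model of any specific contribution), many of the listed materials, such as fresh concrete, debris, grease, and polymer melts, are often described in simple shear by the Herschel-Bulkley relation
\[
\tau = \tau_y + K\,\dot{\gamma}^{\,n} \quad \text{for } \tau > \tau_y,
\qquad
\dot{\gamma} = 0 \quad \text{for } \tau \le \tau_y,
\]
which recovers the Newtonian fluid (\(\tau_y = 0\), \(n = 1\)), the power-law fluid (\(\tau_y = 0\)), and the Bingham plastic (\(n = 1\)) as special cases. Regularizing the yield behaviour and coupling such rheology to interface tracking, phase transitions, and porous media is one of the numerical challenges this session targets.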
This minisymposium covers applications of state-of-the-art CFD (computational fluid dynamics) simulations to multi-physics problems in science and engineering. The topics of interest cover, but are not limited to: reactive flows, multiphase/multiscale flows, Newtonian/non-Newtonian fluid flows, and turbulent flows. It serves as a forum to exchange ideas for the future development of this field. Emphasis will be on novel computational methods, leading-edge numerical simulations, and innovative attempts at applying deep machine learning. Recent advances in data-driven analytics and AI (artificial intelligence) push the boundaries of traditional disciplines, including fluid science and engineering. Advanced CFD simulations in association with machine learning may enable us to address new challenges in solving complex flow problems across length and time scales, as well as to achieve data-driven surrogate modelling that reduces the computational load. Most welcome are contributions from students and young researchers working on computational fluid dynamics, concerned with unsolved or not yet fully satisfactorily solved problems and possible new approaches.
Fluid flow in porous media is ubiquitous in many natural and engineering applications, such as groundwater flow and contaminant transport, oil and gas extraction, soil remediation, filtration processes, and enhanced oil recovery techniques. Accurate descriptions of flow behavior in heterogeneous porous media with different surface properties and microstructures are often desired. This mini-symposium is intended to provide a forum for sharing and exploring recent progress on the development and application of numerical methods for fluid flow in porous media.
Examples of specific areas include – but are not limited to – the following:
Development and application of numerical methods (direct numerical simulation, FVM/FEM, lattice Boltzmann method, smoothed particle hydrodynamics, pore-network models, etc.)
Flow physics (Newtonian, non-Newtonian, miscible/immiscible, viscoelastic)
Passive/reactive transport (mixing, dispersion, dissolution, precipitation)
New physical insights and theoretical analyses
Frameworks for upscaling of effective properties (nano-scale, pore-scale, Darcy-scale descriptions)
Applications involving isotropic/anisotropic porous materials with heterogeneous porosity/wettability.
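At the Darcy scale mentioned above, the upscaled description that many of these methods aim to close or enrich takes the familiar single-phase form (quoted for orientation only)
\[
\mathbf{u} = -\frac{\mathbf{K}}{\mu}\left(\nabla p - \rho\,\mathbf{g}\right),
\qquad
\nabla \cdot \mathbf{u} = q,
\]
where \(\mathbf{u}\) is the Darcy flux, \(\mathbf{K}\) the (possibly anisotropic and heterogeneous) permeability tensor, \(\mu\) the dynamic viscosity, \(p\) the pressure, \(\rho\,\mathbf{g}\) the gravitational body force, and \(q\) a source term. Pore-scale simulations and upscaling frameworks supply \(\mathbf{K}\) and its extensions (relative permeabilities, dispersion tensors) for media with heterogeneous porosity and wettability.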
The aim of reduced order modeling (ROM) is to efficiently simulate phenomena using a reduced computational framework, while retaining the high accuracy of standard discretization techniques. By extracting key knowledge from precomputed solutions (snapshots), ROM enables low-dimensional representations, reducing computational costs for parametric simulations in many-query scenarios.
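For readers less familiar with the snapshot idea, the following minimal Python sketch shows the proper orthogonal decomposition (POD) construction of a reduced basis from a snapshot matrix; the random data, tolerance, and dimensions are placeholders standing in for actual high-fidelity flow solutions.

import numpy as np

# Snapshot matrix S: each column is one precomputed high-fidelity solution
# (random data here stands in for, e.g., velocity fields at sampled parameters).
n_dofs, n_snapshots = 10_000, 200
S = np.random.default_rng(1).random((n_dofs, n_snapshots))

# POD via thin SVD of the snapshot matrix.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)

# Truncate to the smallest basis capturing 99.9% of the snapshot energy.
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
r = int(np.searchsorted(energy, 0.999)) + 1
V = U[:, :r]                         # reduced basis of dimension r

# A full-order state u is approximated as u ~ V @ a; the r coefficients
# a = V.T @ u are what an intrusive (Galerkin) or non-intrusive (data-driven)
# ROM actually evolves in the online phase.
u = S[:, 0]
a = V.T @ u
print(r, np.linalg.norm(u - V @ a) / np.linalg.norm(u))

How the reduced coordinates are evolved online, intrusively or in a data-driven way, is precisely where the methodological contributions sought here differ.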
This minisymposium focuses on presenting the latest advancements and insights in ROM strategies, with a specific emphasis on fluid dynamics. The objective is to identify the state of the art, address existing challenges, and explore future perspectives in the field.
The topics covered will include methodological developments in numerical analysis and model order reduction, with a strong focus on mathematical modeling and efficient approximations. Additionally, various applications such as inverse problems, optimal flow control, shape optimization, bifurcating phenomena, and uncertainty quantification will be discussed. The presentations will also touch upon advances in dealing with complex physical systems in multi-physics contexts, fluid-structure interaction, and more general coupled phenomena.
Tackling such complex applications has led to a growing interest of the ROM community in novel methodologies based on non-intrusive (also called data-driven) approaches. Given the rise of machine-learning-enhanced reduced models, we also aim to connect with researchers bringing new perspectives on efficient methodologies for computational fluid dynamics simulations.
The goal of this minisymposium is to foster idea exchange and encourage fruitful discussions across a wide range of academic, industrial, medical, and environmental applications. By bridging the gap between high-performance computing, advanced reduced order modeling and real-time computing, we aim at bringing together researchers actively involved in the development of novel reduced strategies in computational fluid dynamics to stimulate interactions and potential collaborations.
This minisymposium covers computational modeling of transport phenomena in micro/nanofluids. Numerical modeling and fundamental understanding of these phenomena are crucial for improving the performance of technologies in areas such as biology, medicine, geology, and energy storage, to name a few. The topics of interest include, but are not limited to, fluidic, ionic, and particulate transport at the micro/nanoscale and numerical methods for simulating flows and soft matter at small scales. Recent advances in modeling capabilities such as molecular dynamics, mesoscopic methods (dissipative particle dynamics, lattice Boltzmann method), and continuum simulations, combined with data-driven approaches, have led to considerable growth in predicting and understanding transport phenomena in micro/nanofluids. This minisymposium serves as a forum for discussing recent advancements, exchanging ideas, and exploring collaborations for future developments in this field. Contributions from students and young researchers working in this field are also welcome.
Computational Fluid Dynamics (CFD) with well-established numerical schemes such as finite difference, finite volume, and finite/spectral element methods, along with experimental measurements and analyses, have long been the cornerstones of research in fluid mechanics and engineering applications. They offer valuable insights into fluid flow phenomena. Turbulent flows, characterized by the nonlinear interaction of a wide range of spatial and temporal scales, are now better predicted thanks to High-Performance Computing (HPC) and improved turbulence modeling and dynamics learning.
However, the enormous amount of potentially noisy data generated, the modeling complexity of multi-scale/phase/physics high-dimensional real-world problems, and the need for parametric explorations call for a shift in paradigm in which scientific machine learning (SciML) plays an increasingly important role alongside more traditional physical modeling. This becomes particularly relevant for multi-query approaches required for uncertainty quantification, robust design optimization, reduced-order modeling, manifold learning, and adaptive control in fluid flow applications.
Recent AI techniques have demonstrated their potential for data-driven modeling in the era of big data, enhancing efficiency and providing data-driven insights. In this mini-symposium (MS), we are also keenly interested in physics-informed modeling and the combination of these techniques at the interface between traditional methods and emerging data-driven approaches. Furthermore, we aim to understand how essential CFD components, such as adaptivity, mesh refinement, error control, multi-fidelity, super resolution, HPC efficiency, and storage, transition into the era of scientific machine learning.
In this MS, all classes of machine learning methods, including Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), AutoEncoders, U-Net, Reinforcement Learning (RL), Generative Adversarial Networks (GANs), and Reduced-Order Models (ROMs), will be very welcome, especially if they incorporate physical knowledge. Envisioned applications are diverse and include the development and calibration of turbulence modeling, simulations of shocked flows, wall modeling, active flow control, aerodynamic and aeroelastic optimization, modeling biological and cardiovascular flows, fluid-structure interactions, and uncertainty quantification in fluid flow phenomena.
Particle-laden flows are two-phase fluid flows in which one phase is continuously connected and the other phase is made up of small immiscible particles. The modeling of particle-laden flows has a wide variety of scientific and engineering applications: dispersion of contaminants in the atmosphere, fluidization in combustion processes, deposition of aerosols in aerosol drugs, spread of viruses in the air, rain formation in clouds, sand and dust storms, protoplanetary disks, volcanic eruptions, geological sedimentation processes, pharmaceutical sprays, liquid-fueled combustion, solid rocket motors, coal furnaces, and particle-based solar receivers, among many others.
The multiscale and nonlinear interactions between the carrier and the dispersed phases lead to complex flow physics and pose unique modeling challenges. Moreover, many of these flows involve turbulence; the simultaneous presence of two of the most challenging topics in fluid mechanics, namely multiphase flows and turbulence, makes them a still largely unsolved problem.
The study of particle-laden flows is also the basis for the simulation of active fluids, in which the particles themselves move with their own energy.
This mini-symposium hopes to bring together researchers in the modeling of fluids with particles, from the simplest cases where the particles are simply convected, to the most complicated cases where the particles move at velocities different from those of a turbulent carrier fluid.
Advanced numerical methods enable us to delve into intricate flow phenomena within complex geometries, all while navigating through various temporal and spatial scales. One prime illustration of such flows resides in pore-scale porous media (PSPM). Within this realm, the intricate interplay of geometry and physics compels us to tackle nonlinear equations, like the Navier-Stokes (NS) equations, while incorporating pertinent boundary conditions. Directly observing multiphase flow in complex pore structures within experimental setups is the ideal approach for studying physical phenomena in porous media. However, the complexity of pore space geometry and connectivity can make this impractical. In such cases, one can explore the microscopic flow behavior in porous media through pore-scale modeling, which employs particle-based techniques, grid-based computational fluid dynamics (CFD) models, and process methods.
In certain scenarios, where boundary conditions are mutable, as in cases involving variable ambient temperatures or multiphase/multicomponent flows within PSPM, the complexity of the problem escalates significantly. Moreover, when dealing with complex geometries or elaborate case studies, the sheer volume of data generated presents distinctive challenges. The post-processing and analysis of such data necessitate the application of innovative data processing techniques, all while being cognizant of the constraints imposed by computational resources.
In recent years, the fusion of data science and numerical simulations has opened up fresh perspectives for addressing the challenges posed by big data. Researchers have combined machine learning, deep learning, and artificial intelligence with CFD simulations and experimental data to forecast, post-process, and scrutinize the behaviors of intricate systems. As such, the goal of this mini-symposium is to gather experts in the field of PSPM to discuss the following subjects:
Innovative CFD methods for porous media simulation
Advanced experimental measurements in porous media
Use of machine learning in porous media
Validation and tuning of the porous media data set
Fluid flow, and heat and mass transfer in porous media
Meshless numerical approach for simulating packed beds of particles
Nanofluid and nanoparticle transport phenomena in porous media
Diverse applications of porous media (heat exchangers, geothermal heat sources, fuel cells, carbon capture, energy storage systems, biofuels, particulate filters, biomedical, biomaterials, etc.)
The lattice Boltzmann method (LBM), which solves a specific discrete Boltzmann equation designed to reproduce the continuous Navier-Stokes (N-S) equations in the low-Mach-number limit, has been increasingly applied as a very powerful numerical model for various complex flows and transport phenomena. The mesoscale nature of the LBM allows the natural incorporation of micro- and mesoscale physics, leading to a straightforward treatment of multiphase/multicomponent interfacial dynamics. The bounce-back type of boundary schemes in the LBM is very suitable for flows in complex geometries, e.g., porous media. In addition, the canonical “collision-streaming” algorithm disentangles non-linearity and non-locality, i.e., the nonlinear collision operator is entirely local and the non-local streaming is linear in the discrete distributions, making it highly efficient in large-scale parallel computations. Owing to these strong advantages, the LBM has drawn a lot of attention in the past three decades and has been developed into a powerful numerical approach for simulating fluid flows and solving nonlinear problems.
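For readers less familiar with the “collision-streaming” structure mentioned above, the following sketch shows a bare-bones D2Q9 BGK update with periodic boundaries; the grid size, relaxation time, and resting initial state are arbitrary placeholders, and the code is illustrative rather than a production solver.

```python
import numpy as np

nx, ny, tau = 64, 64, 0.8
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # D2Q9 discrete velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                # lattice weights

f = np.ones((9, nx, ny)) * w[:, None, None]             # start from rest (rho = 1, u = 0)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy   # c_i . u at every node
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

for _ in range(100):
    rho = f.sum(axis=0)                                  # macroscopic density
    ux = (c[:, 0, None, None] * f).sum(axis=0) / rho     # macroscopic velocity
    uy = (c[:, 1, None, None] * f).sum(axis=0) / rho
    f -= (f - equilibrium(rho, ux, uy)) / tau            # local, nonlinear BGK collision
    for i in range(9):                                   # linear, non-local streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
```

The two update stages map directly onto the locality/linearity split described above, which is what makes the method attractive for large-scale parallel implementations.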
The mini-symposium is dedicated to the discussion of recent developments of LBM and its applications to various complex flow problems, including but not limited to:
New collision schemes, forcing schemes, and boundary schemes in LBM
Improved multiphase/multicomponent LBM
Coupling LBM with other numerical methods (pore-network method, discrete element method, etc.)
Lattice Boltzmann study of multiphase flows
Lattice Boltzmann study of flows and transport phenomena in porous media
Lattice Boltzmann study of phase-change heat and mass transfer
Quantum algorithms for Lattice Boltzmann equation
0800 Numerical Methods and Algorithms in Science and Engineering
Computational modeling of friction and wear presents significant challenges due to the presence of multiple sources of nonlinearity, including geometry, material, and contact nonlinearity. Moreover, rough surfaces in contact exhibit roughness at multiple length scales, ranging from the atomic level to engineering components. These challenges are further complicated by the occurrence of multiple mechanical phenomena, such as adhesion, plasticity, and fracture, which display scale dependency.
To address these challenges, this mini-symposium aims to foster interdisciplinary collaboration among researchers in computational mechanics, solid mechanics, and data science. The goal is to develop numerical methods and models that can capture the material surface and bulk phenomena required for a better understanding of the mechanics of friction and wear at different length scales. The focus will be on the latest numerical developments for modeling friction and wear-related phenomena.
Topics of interest include, but are not limited to, the following areas:
Modeling elastic and inelastic deformation of rough surfaces in contact
Modeling surface/bulk damage and crack propagation during sliding contact
Modeling and simulations of adhesive contact, friction, and wear
Development of new continuum and discrete numerical techniques for contact mechanics
Modeling friction and wear through a data science approach such as machine learning
Structural responses under extreme and environmental conditions, such as impact, penetration, explosion, high-speed machining, and manufacturing and surface treatment under high temperature and high pressure, have received wide attention in recent years because of the interesting and important phenomena involved and the great challenges they pose to computer modelling and simulation. As localization, fracture, fragmentation and phase transformation occur, multi-scale and multi-physics phenomena must be fully considered, and new theories and numerical methods are needed to model and simulate structural responses under extreme loading conditions in accurate and effective ways. This minisymposium aims at providing an opportunity for academic researchers and industrial engineers in the related fields to discuss recent progress and to promote collaboration. Those who have been working in the related fields are cordially invited to exchange their ideas and research results in this minisymposium. Presentations are solicited in all subjects related to the model-based simulation of structural responses under extreme conditions, which include but are not limited to the following:
Development and improvement of advanced numerical methods, such as meshfree particle methods, X-FEM, boundary-type methods, isogeometric methods, discrete element methods, peridynamics, machine learning enhanced approaches for modelling and simulation of structural responses under extreme conditions
Simulation-based disaster prediction and mitigation
Efficient and accurate impact-contact algorithms
Multi-scale and multi-physics modelling schemes
Numerical methods and coupling algorithms for multi-physics processes
Parallel algorithms and large-scale computation for the problems with extreme loading
Coupled Lagrangian-Eulerian schemes for the problems with moving boundaries
Inverse solutions and optimization in the problems with extreme loading
Verification, validation, and software development
Numerical algorithm implementation and simulation software development
Other related subjects
Meshfree (meshless) methods offer flexibility in constructing spatial approximations without the need for element connectivity. With this advantage, meshfree methods have been developed and investigated in various research areas in recent years, for example, isogeometric analysis, nonlinear and large deformation analysis, inverse problems, peridynamics, geomechanics, biomaterial modeling, fluid dynamics, extreme events modeling, solid-fluid interaction, and recently popular machine learning techniques. To date, many meshfree methods have been proposed, such as smoothed particle hydrodynamics, the element-free Galerkin method, the reproducing kernel particle method, the material point method, the generalized finite difference method, strong-form collocation methods, peridynamics, and even physics-informed neural network approaches in machine learning, to name a few. The objective of this minisymposium is to hold a forum to report the recent developments and applications of meshfree methods by researchers from engineering, mathematics and industry. Contributions on computational mechanics and mathematics in meshfree methods, as well as industrial applications using meshfree methods, are cordially invited.
The minisymposium is intended as a forum for exchange and debate on advanced multiscale (in the widest sense) computational methods for studying the behaviour of materials and structures. The aim is to bring together researchers (engineers, physicists, mathematicians) specialising in computational mechanics and numerical modelling for the simulation of solids and structures. Papers may cover a wide range of numerical aspects related to the modelling of materials or structures. In this context, the interest and relevance of multiscale and/or adaptive strategies will be highlighted. The focus will be on computational issues, while highlighting the underlying conceptual and theoretical foundations. Application topics will include (but not be limited to):
Heterogeneous media;
Localized effects;
Adaptive Mesh Refinement;
Multilevel or multi-model strategies;
Complex material behaviour;
Interface and contact problems;
Non-standard continua;
Coupled problems
Keywords: multiscale, multilevel, adaptive mesh refinement, solid mechanics, non-linear problems, advanced numerical methods.
Interface problems are widespread in multiphysics models in science and engineering. Applications include, for example, modeling the interaction of blood flow and blood vessels, the spread of contaminants in multiple subsurface structures, ground/ocean/atmosphere dynamics, and wing flutter problems. The computation of these interface problems, which couple different physical models across interfaces, presents many mathematical and algorithmic challenges. The focus of this minisymposium will be recent advancements in topics including, but not limited to, monolithic and partitioned discretization and solution methods, stability and error analysis, and high-order timestepping schemes.
Recent developments in the analysis of nonlocal systems of equations showcase the importance of studying solutions that may exhibit discontinuous, singular, or irregular behavior. Within the scope of this session we will focus on numerical methods and algorithms for partial integro-differential equations (PIDEs) with applications in computational engineering, as well as theoretical and modeling aspects that contribute to the understanding of these systems. The participants in this session will present new contributions to these areas, including results on well-posedness and regularity theory, stability results, decomposition properties, and connections with classical theories. The participants will also discuss new challenges and opportunities in the nonlocal framework, as well as future directions of research. The organizers will invite a diverse group of speakers, including participants from underrepresented groups and women, ranging from early-career researchers to senior members of the community. The aim of this minisymposium is to foster discussion between the numerical and theoretical communities.
This MS is dedicated to current research on boundary element methods (BEMs). Not only theoretical and numerical investigations but also scientific and industrial applications are welcome.
This minisymposium is devoted to exchange of ideas for improving simulation tools for wave propagation based on finite elements. Numerical techniques of interest include spacetime methods, reduced order models, nonlinear materials, novel time stepping schemes, tent-based schemes, Trefftz methods, infinite-domain truncation, and fast solvers. All application areas involving waves are welcome, especially electromagnetics, photonics, metamaterials, plasmonics, high energy lasers, optical fiber amplifiers, gravitational waves, acoustics, seismic applications, elastodynamics, water waves, forward and inverse scattering, and multiphysics wave problems.
Join us for this mini-symposium as we bring together esteemed experts from around the globe to discuss the latest breakthroughs in numerical modeling of granular and multi-phase flows. This symposium will be a platform to explore the cutting-edge advancements in various aspects, covering granular flows, gas-solid flows, solid-liquid flows, and gas-solid-liquid flows.
This symposium warmly welcomes submissions featuring innovative methods related to the Discrete Element Method (DEM), Smoothed Particle Hydrodynamics (SPH), Volume-of-Fluid (VOF), and Lattice Boltzmann Method (LBM), among others.
Please don't miss the opportunity to engage with experts in the field and gain insights into the state-of-the-art modeling techniques that drive advancements in granular and multi-phase flows.
Thin-walled structures are widely used in the aerospace industry, and their load-carrying capacity is significantly influenced by buckling. A reliable prediction of buckling phenomena requires a robust, efficient and accurate analysis tool and consideration of a number of inherent structural imperfections, which often dominate the overall non-linear elastic response. These considerations are also key factors for an innovative and sustainable thin-walled structural design that exploits the full lightweight potential.
Nowadays, new theories and computational methods for buckling problems are constantly emerging. With the rapid development of artificial intelligence, a series of intelligent technologies, such as data-driven methods and digital twins, have been successfully applied to engineering structural analysis and design.
This mini-symposium aims at bringing together researchers from across the structural buckling community to discuss and exchange the latest achievements in the field of novel methods for the buckling analysis and design of thin-walled structures. Topics of interest include, but are not limited to, computational and algorithmic aspects of analytical and semi-analytical methods, numerical methods, and various data-driven intelligence methods.
Pioneered by Francfort and Marigo, the phase-field theory avoids the tracking of mobile sharp crack surfaces, replacing the otherwise implied boundary conditions with a partial differential equation that governs the evolution of time- and space-dependent functions (the phase field). The governing equation of the phase field is devised to guarantee that the motion of the diffused interfacial region complies with the prescribed physical laws, e.g., Griffith’s brittle fracture theory. In this sense, the phase-field theory serves as a regularized mathematical interface, which incorporates the complex sharp crack surface into a spatially smooth continuum scheme and facilitates finite-element-based numerical implementations in a straightforward manner. In addition to the theory’s successful application to classical mechanical problems, e.g., composite delamination, functionally graded materials, rock fracture, large-strain fracture of polymers, and interfacial fracture of concrete, the theory has been extended to many multi-physics problems, including hydrogen embrittlement, cement hydration, stress corrosion, Li insertion, coupled fluid–structure fracture, and polymer oxidative aging.
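For reference, one common variational form underlying the regularization described above (the so-called AT2 phase-field approximation of Griffith's theory) can be written as follows; the notation is generic and not specific to any particular contribution in this session.

```latex
% Regularized fracture energy (AT2-type, generic notation)
E_\ell(\mathbf{u}, d) \;=\; \int_\Omega (1-d)^2\, \psi\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)\,\mathrm{d}\Omega
\;+\; G_c \int_\Omega \left( \frac{d^2}{2\ell} + \frac{\ell}{2}\,\lvert\nabla d\rvert^2 \right) \mathrm{d}\Omega ,
```

where d in [0,1] is the phase field (d = 0 intact, d = 1 fully broken), ell is the regularization length, psi the elastic energy density, and G_c the critical energy release rate; the evolution equation for the phase field follows from stationarity of E_ell under an irreversibility constraint.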
We organize this symposium to discuss recent progress in various aspects of the phase-field method (PFM) for fracture. Topics of interest include (but are not limited to):
Phase-field model for fracture in multi-physical problems
High-performance computing strategies for phase field approach
Multi-scale modeling involving the phase field approach
Complex fracture problems by the phase field approach
Engineering computing and application of phase field approaches
High-fidelity simulations have become indispensable in the design and analysis of complex physical systems. However, these simulations present two major challenges for real-world applications: (1) they remain computationally expensive and infeasible for many-query upstream tasks such as optimization and uncertainty quantification; and (2) their ability to accurately describe physical phenomena, such as transient and chaotic dynamics, is limited and accompanied by high uncertainties. In this regard, recent progress in data-driven methods for complex physical systems is receiving growing attention as a potential path to address these two major challenges. This mini-symposium focuses on challenges, advances, and prospects in model reduction, data assimilation, and uncertainty quantification for complex physics problems and will provide a platform for discussion and interaction between researchers working on different aspects of these techniques.
While physical simulation has become an indispensable tool in engineering design and analysis, many real-time and many-query applications remain out of reach for classical high-fidelity analysis techniques. Model reduction is one approach to reduce the computational cost in these applications while controlling the error introduced in the reduction process. In this mini-symposium, we discuss recent developments in model reduction techniques. Topics include, but are not limited to, nonlinear approximation techniques; high-dimensional problems; hyperreduction methods for nonlinear PDEs; data-driven methods; incorporation of machine-learning techniques; error estimation and adaptivity; and their applications to optimization, feedback control, uncertainty quantification, and inverse problems in fluid and structural dynamics, with an emphasis on large-scale, industry-relevant problems. The minisymposium will bring together researchers working on both fundamental and applied aspects of model reduction to provide a forum for discussion, interaction, and assessment of techniques.
The objective of this symposium is to discuss new advances in numerical methods for linear and non-linear time-dependent and time-independent partial differential equations used in mechanics. Topics of interest include, but are not limited to: new space and time discretization methods; high-order accurate methods with conformal and unfitted meshes including finite, spectral, isogeometric elements, finite difference methods, fictitious domain methods, meshless methods, and others; special treatment of boundary and interface conditions on irregular geometry; new time-integration methods; adaptive methods and space and time error estimators; comparison of accuracy of new and existing numerical methods; application of new numerical methods to engineering problems; and others.
Materials modelling and simulation serve as essential pillars, driving the exploration and creation of groundbreaking materials and chemicals, especially in critical sectors such as energy and biomedicine. The landscape of materials science presents a diverse array of challenges intricately intertwined with complexities that demand the meticulous scrutiny and exactitude of computational mathematics. Alongside these complexities, mathematicians play an indispensable role, harnessing their analytical prowess to establish solid theoretical underpinnings, propelling the advancement of this multidisciplinary arena.
The proposed minisymposium is firmly rooted in a visionary pursuit: fostering a vibrant platform for scholarly discourse and cooperative cross-fertilization among researchers profoundly engaged in the dynamic and swiftly progressing realm of mathematics in materials science. This symposium aspires to transcend traditional disciplinary boundaries, bringing together a diverse array of perspectives to engage in thoughtful deliberation on the intricate tapestry of mathematical exploration within the context of materials science. At its core, this dialogue is centred on the investigation of numerical methodologies and mathematical modelling, serving as pivotal tools for unveiling the intricate dynamics governing material behaviour, thus offering insights into their macroscopic manifestations emerging from the quantum realm. Moreover, the symposium is dedicated to illuminating the crucial role played by state-of-the-art machine-learning techniques in this domain. As the frontiers of materials science extend to embrace innovative artificial intelligence and data-driven approaches, the symposium aims to provide a platform for the convergence of these innovative methodologies with a solid foundation of mathematical understanding.
Both hyperbolic and parabolic time-dependent problems have been of great interest in the applied mathematics and engineering communities as they cover a wide range of applications. To improve accuracy both in space and time, several high order methods have been developed in the recent years. However, with today's exascale computing architectures we also aim to solve these problems effectively, not just accurately. Many methods have achieved great success in parallelization, such as parallel-in-time and space-time methods, among many others.
In this minisymposium we aim to present the state-of-the-art theoretical and application based results, and bring the members of this community closer to each other. Methods of interest include, but are not limited to, space-time discontinuous Galerkin, space-time finite element, implicit-explicit methods, asynchronous, parallel-in-time and adaptive mesh refinement.
Presentations regarding exascale or highly parallel implementations (such as application of GPU platforms) are also welcomed.
The objective of this minisymposium is to bring together researchers working on discretizations of nonlinear partial differential equations (PDEs) with provable properties such as nonlinear stability, with an emphasis on robust schemes and applications to industrial-strength cases. The focus will be on the use of high-order methods for high-fidelity simulations, where traditionally the stated methods have suffered from a lack of stability and efficiency. In this minisymposium, we look broadly at nonlinear PDEs and the mathematics required to develop efficient high-order schemes with provable properties. Example PDEs of interest include, but are not limited to, incompressible and compressible flow equations, multiphase equations, nonlinear wave equations, and nonlinear reaction-diffusion equations. The sessions will be scheduled in such a manner that the first one or two sessions present state-of-the-art developments on topics related to, for example, nonlinear stability, positivity preservation, time integration, and shock capturing, while the subsequent sessions present research on the application of such methods to industry-relevant cases.
Digital Twins (DTs) are rapidly becoming a key enabling technology that capitalizes on decades of investment in computational modeling to bring about capabilities beyond forward simulation, such as dynamic data assimilation and data-driven decision making informed by system-specific analysis.
This minisymposium will provide a forum for the exchange of ideas spanning foundational DT technologies such as data-driven, reduced order and surrogate models, advanced couplings, and data assimilation. Also of interest is the rigorous and agile coupling of arbitrary combinations of data-driven and conventional methods, particularly in relation to the heterogeneous physics, multifidelity, and multiscale components that constitute a DT.
The main themes of this session include:
nonlinear dimensionality reduction
preservation of topological, structural, and qualitative properties under model order reduction
development of deep learning surrogates
couplings between data-driven, first-principles models, and multi-fidelity models
software architectures supporting the DT paradigm
This minisymposium will bring together researchers working on developing, analyzing, and deploying novel discretizations of partial differential equations (PDEs) and data-driven models, which preserve the key structural properties of the continuous PDE solutions. Examples of such properties include local conservation of mass, satisfaction of divergence and curl conditions, preservation of symmetry and topology, satisfaction of maximum principles and entropy conditions, etc.
The minisymposium will feature novel approaches to structure preservation based on numerical optimization, residual redistribution, topological data analysis, entropy filtering, shock capturing, mesh correction, and other related techniques. Another important goal is to showcase recent advances in new and developing discretization techniques from physics-informed machine learning where data-driven models incorporate structure either by construction in the learning architecture or by novel choice of loss function; in this setting, we aim to identify models from data, which preserve analogs of classical structure-preserving PDE discretizations. Through this minisymposium, we hope to highlight the close relationships between various approaches under development, thus facilitating a deeper mathematical understanding and a broader use of modern structure-preserving methods for PDEs.
In the realm of scientific and engineering simulations, the quest for enhanced accuracy, computational efficiency, and robustness remains paramount. This mini-symposium aims to delve into recent developments in numerical methods and algorithms, specifically tailored for Computational Fluid Dynamics (CFD) and Fluid-Structure Interaction (FSI) problems across diverse fields. The symposium emphasizes crucial aspects, covering advanced discretization techniques, reduced order models, high-performance computing strategies, and the integration of machine learning for data-driven simulations. These advancements hold the potential to revolutionize simulation efficiency and fidelity across aerospace, automotive, energy, biomedical, and environmental engineering domains, among others.
The mini-symposium will provide a platform for researchers to exchange insights on novel discretization schemes that transcend traditional methods. Innovations in finite element, finite volume, spectral, meshless, and non-matching methods will be explored, demonstrating their efficacy in capturing complex flow phenomena and accurately resolving fluid-structure interactions.
Reduced order models (ROMs) constitute another critical facet of this symposium. State-of-the-art ROM methodologies enable rapid yet accurate approximations of high-dimensional systems, significantly accelerating simulations while maintaining reliability. The discussions will encompass techniques like Proper Orthogonal Decomposition (POD), Dynamic Mode Decomposition (DMD), and more.
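As a minimal, illustrative complement to the ROM methodologies listed above, the sketch below computes a rank-r exact DMD of a snapshot sequence; the matrix names, dimensions, and rank are hypothetical, and the code is a sketch rather than a production implementation.

```python
import numpy as np

def dmd_modes(X, r):
    """Rank-r exact DMD of a snapshot sequence stored column-wise in X."""
    X1, X2 = X[:, :-1], X[:, 1:]                  # consecutive snapshot pairs
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    Ur, Sr, Vr = U[:, :r], S[:r], Vh[:r].conj().T
    A_tilde = Ur.conj().T @ X2 @ Vr / Sr          # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)           # discrete-time DMD eigenvalues
    modes = X2 @ Vr / Sr @ W                      # exact DMD modes
    return eigvals, modes
```

The eigenvalues approximate the discrete-time dynamics of the snapshot-to-snapshot map, and the modes provide the corresponding spatial structures used in low-dimensional reconstructions.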
Given the escalating demand for simulation realism and expedited analyses, high-performance computing (HPC) strategies have assumed paramount importance. Presentations within this symposium will expound on domain decomposition techniques, inexact solvers and preconditioners, parallelization, GPU utilization, and other HPC paradigms that amplify computational throughput without compromising solution quality.
Furthermore, this symposium recognizes the transformative role of machine learning in simulation. Data-driven techniques powered by machine learning algorithms are reshaping how simulations are performed, leveraging real-world data to enhance accuracy and optimize computational resources. Researchers will share insights into integrating machine learning models with simulations, addressing challenges, and harnessing the power of neural networks, reinforcement learning, and generative adversarial networks to drive simulations to new frontiers.
In summary, the mini-symposium underpins the convergence of computational mechanics, fluid dynamics, structural analysis, and machine learning, across many vital fields. By spotlighting advanced discretization techniques, reduced order models, high-performance computing strategies, and data-driven simulations, this symposium offers a forum to explore transformative techniques that transcend existing approaches.
The potential of quantum computing to solve scientific and engineering problems has been recognized over the past decade. The power of quantum computers stems from their efficiency in computational time and space for difficult problems, achieved by taking advantage of quantum superposition and entanglement. Quantum algorithms have been developed to solve engineering problems such as linear systems, eigenvalue problems, optimization, machine learning, and simulation. This minisymposium provides a platform for researchers to exchange the latest ideas in quantum scientific computing for solving engineering and materials problems. The topics of interest include but are not limited to:
Quantum algorithms for computational solid and fluid mechanics
Quantum computing for multiscale and/or multiphysics problems
Quantum optimization algorithms
Quantum algorithms and methods for materials discovery and materials design
Uncertainty quantification for quantum computing
Quantum machine learning and its combination with computational mechanics
New computing architecture for noisy intermediate-scale quantum computers (e.g., tensor networks)
Control mechanisms and error corrections for quantum computing
Simulators of quantum computers on classical computing platforms
Design and optimization of quantum computer systems
Benchmark studies of quantum algorithms and quantum computers
This mini-symposium will emphasize a range of analytical, computational and experimental approaches, which can be applied to the solution of inverse and optimization problems in science and engineering [1-4]. Contributions dealing with practical applications are encouraged, such as in mechanics, civil engineering, aeronautics, bio-medicine, transport and sensing of pollutants, materials design and processing, remote sensing, non-destructive evaluation, meta-models for high-dimensional problems, deep learning algorithms, etc. The following list covers some of the topics to be presented at this mini-symposium. Papers on other subjects related to the themes of this symposium are also welcome.
Inverse Problems: Mechanics, Aeronautics, Vehicle engineering, Civil engineering, Material science, Damage detection, Fault diagnosis, Heat and mass transfer, Acoustics, Imaging, Bio-medicine, Electromagnetism, Geophysics, Transport and sensing of pollutants, Non-destructive evaluation, etc.
Numerical Algorithms: Ill-posedness analysis and regularization techniques, Semi-inverse problems and methods, Large-scale inverse problems, Sensitivity analysis, Evolutionary algorithms, Geometric problems, Determination of boundary and initial conditions, Dynamic load identification, etc.
Optimization Design: Design sensitivity analysis and global optimization, Shape and topology optimization, Meta-models for high-dimensional problems, etc.
Data-driven Based Algorithms: Data analysis, Signal and noise processing, Pattern recognition, Identification based on machine learning, Deep learning algorithms, Data assimilation methods, Machine learning based optimization, etc.
Composite materials continue to attract interest from advanced industries, mainly because of their advantageous mechanical characteristics. Nevertheless, many problems related to the optimization, design, and verification of composite structures remain unsolved, primarily because of the lack of appropriate methodologies and analysis tools.
Nowadays, multi-scale simulation is a popular method for analyzing and predicting, for example, failure mechanisms and, eventually, for taking into account the intricate architecture of composite materials at their different scales. Multi-physics problems in advanced and smart composites, in addition to composite applications in extreme environments, add to the complexity of computational modeling. Sensitivity analysis and uncertainty quantification, integrated with multi-physics and multi-scale simulations, need to be used to obtain more reliable designs. Although multi-scale simulations may soon become computationally prohibitive, and the problem domain is, in general, limited to small portions of the structure or to representative units, breakthrough research is emerging in this domain.
The mini-symposium “Multi-scale and machine learning-based modeling methods for optimization and design of composites” aims at outlining the state-of-the-art and the perspectives of the research in the field of simulations of advanced composites materials and structures. Scientists are invited to share new research ideas and results about all aspects of the modeling and design of composites. Topics of interest include but are not limited to micro-mechanics, meso- and multi-scale analysis, beam, plate, and shell models, sensitivity analysis, robust design and optimization, crack initiation and propagation, impact analysis, aging, multi-field, and multi-physics problems. Attention will also be given to the use of (physics-based) machine-learning algorithms for more accurate predictions of damage and mechanical responses, solving inverse problems, optimization, defect quantification and characterization, and best theory selection.
Peridynamics modeling has been effectively used to predict material failure and damage in many applications, and it was successfully compared against various experiments. However, some theoretical and numerical understanding of simulation results, e.g., crack nucleation, is still missing. The purpose of this symposium is twofold. First, current developments in theoretical efforts will be presented, one objective being to understand better the possible synergies between peridynamics and more classical approaches as well as the way they can be used concurrently or not. Second, advances in computational efforts in recent years will be discussed, and the observed challenges will be highlighted. By combining theoretical and applied presentations, this symposium aims to strengthen the synergy between researchers working on analytical and numerical methods and discuss current challenges and open questions in the field of peridynamics.
Accurate dynamical models of real-world phenomena have become indispensable tools in numerous scientific and industrial applications such as vibrational analysis and control of mechanical systems, shape optimization, and digital twins. In some cases, one has access to explicit representations of these models resulting from, for example, a semi-discretization of the corresponding partial differential equations (PDEs). Due to the need for greater accuracy, the resulting models are generally rather complex with millions of degrees of freedom, making their simulation a formidable challenge to be used in real-time applications. This motivates the need for model reduction: Given the large-scale model, construct easy-to-simulate reduced models whose behavior is guaranteed to approximate the original one.
These large-scale mathematical models (dynamical systems) naturally inherit many physical properties of the phenomena they represent, encoded in the internal differential and nonlinear structures of these dynamical systems. Typical differential structures include higher-order time derivatives occurring in structural mechanics and acoustics, time delays that account for incompleteness in the modeling process, or integral terms that are often used in conservation laws. Therefore, it is vital that the resulting reduced models retain these properties so that they are physically meaningful surrogate representations. A typical approach for the reduction process is the use of a Petrov-Galerkin projection of the original dynamics. This is referred to in the literature as “projection-based (intrusive) structure-preserving model order reduction”.
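As a concrete illustration of the projection-based, structure-preserving setting described above, consider a generic second-order (structural) model; the notation below is standard textbook notation and is not tied to any specific contribution in this session.

```latex
% Full-order second-order model and its Petrov-Galerkin reduction (q \approx V q_r)
M\ddot{q}(t) + D\dot{q}(t) + Kq(t) = Bu(t), \qquad y(t) = Cq(t),
\\[4pt]
M_r = W^{\top} M V, \quad D_r = W^{\top} D V, \quad K_r = W^{\top} K V, \quad
B_r = W^{\top} B, \quad C_r = C V,
```

so that the reduced model M_r q_r'' + D_r q_r' + K_r q_r = B_r u, y_r = C_r q_r retains the second-order differential structure of the original system (and, for the Galerkin choice W = V with symmetric positive definite M and K, also their symmetry and definiteness).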
However, in some settings one does not have access to an explicit (state-space) representation of the underlying dynamics. Instead, one has an abundant amount of input-output data, either in the time or frequency domain. In these instances, the goal is to “learn a structured model” directly from the data such that the learned model inherits the relevant differential structures (without having explicit access to them). This is what we refer to as “data-driven (non-intrusive) structure-preserving modeling”.
This minisymposium (MS) will focus on both intrusive as well as non-intrusive (data-driven) approaches for approximating structured dynamical systems. Talks in this MS will focus on both theory and real-world applications and will bring together the classical engineering modeling knowledge with state-of-the-art approaches from numerical analysis for the efficient design of structured dynamical systems. Topics covered by this MS will be:
Structural modeling of dynamical systems,
Intrusive (projection-based) structure-preserving model order reduction,
Data-driven (non-intrusive) structure-preserving modeling,
Modeling of structured nonlinearities,
Hamiltonian and port-Hamiltonian systems,
Data-driven vibrational and system analysis.
Machine learning has gained increasing attention in the field of numerical modelling. Fuelled by data, it provides researchers with powerful computing tools and has already led to significant innovations. However, in many real-world engineering and science applications, data scarcity can pose significant challenges for machine-learning-driven numerical modelling, hindering its practical implementation. Recent advancements in ‘physics-informed machine learning’ have enabled the incorporation of guidance from ‘physics’, such as governing equations and boundary conditions, into machine learning, inspiring a transition away from sole reliance on data. Physics-informed machine learning methods have demonstrated the ability to use ‘physics’ as a remedy for insufficient data, resulting in superior performance in terms of accuracy and robustness, particularly for applications with increased complexity and non-linearity. In this context, this mini-symposium aims to foster a rich and comprehensive dialogue at WCCM 2024 about the latest advancements in physics-informed machine learning for numerical methods in engineering and science.
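As a minimal sketch of how governing equations and boundary conditions can enter a machine-learning loss in the spirit described above, the snippet below trains a small network on a 1D Poisson problem -u''(x) = f(x) on (0, 1) with homogeneous Dirichlet conditions; the network architecture, source term, and training settings are arbitrary placeholders.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
f = lambda x: torch.sin(torch.pi * x)                 # hypothetical source term

def pinn_loss(x_interior, x_boundary):
    x = x_interior.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = -d2u - f(x)                            # PDE residual at collocation points
    bc = net(x_boundary)                              # boundary-condition mismatch
    return residual.pow(2).mean() + bc.pow(2).mean()  # physics + boundary loss terms

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    loss = pinn_loss(torch.rand(128, 1), torch.tensor([[0.0], [1.0]]))
    loss.backward()
    opt.step()
```

The residual term plays the role of the ‘physics’ guidance discussed above, reducing the reliance on labeled training data.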
Meshfree and particle methods have been developed in the field of computational mechanics by taking advantage of their robustness against dynamic changes in free surfaces and the propagation of discontinuities. While the advantages of these methods derive from their meshless nature, these features can conversely pose difficulties in the treatment of boundary conditions and in problems of multiphase flows with high density ratios. The purpose of this mini-symposium is to provide a forum for researchers working on meshfree and particle methods to share their recent knowledge and advanced insights. The topics include mathematical theory, discretization schemes, multi-resolution techniques, multi-physics analysis, boundary conditions, accuracy, adaptive analysis, parallel processing, large-scale analysis, applications, and verification and validation of particle methods.
Material defects and inhomogeneities, such as point defects, dislocations and grain boundaries, play essential roles in the mechanical and dynamical behaviors of materials. The complexity arising from the multiscale and stochastic nature of the structures and evolutions of these defects and inhomogeneities presents challenges for mathematical modeling, analysis and numerical calculations. New models based on multiscale approaches and data-driven methods are required to describe the complicated phenomena associated with defects and inhomogeneities in materials accurately and efficiently. Detailed analysis and advanced numerical algorithms are also important to guarantee the convergence, consistency and efficiency of these new models. Speakers in this mini-symposium will discuss recent advances in modeling approaches, analysis techniques and numerical methods for the understanding of material defects and inhomogeneities.
For accelerated materials design, digital microstructures can be used, which exhibit the same characteristics as experimentally measured microstructures but are obtained through microstructure modeling and simulation. This requires the application and development of advanced materials modeling and simulation techniques. In particular, numerical methods based on the phase-field method have become indispensable and extremely versatile tools in materials science, microstructure mechanics, and physics. The method typically operates at the mesoscopic length scale and provides important information about morphological changes in materials by mapping interfacial motions of physically separated regions. It provides a numerically highly efficient treatment of the moving interfaces, as no explicit tracking of the interfaces is necessary. Thus, the phase-field method is widely established for modeling microstructural evolution processes, such as solidification, solid-solid phase transition, growth and coarsening of precipitates, grain growth and martensitic phase transformation. An outstanding feature of the phase-field method is the ability to consider different physical driving forces for interfacial motion due to diffusive, electrochemical, thermomechanical, and other processes. In addition, large-scale numerical simulations can be performed by numerically solving the coupled multiphysics differential equations on high-performance clusters. Due to this versatility, phase-field methods, used in a wide range of fields in materials science and physics, are constantly under development. The main objectives of this WCCM symposium are to establish cross-community standards for phase field modeling by:
Highlighting current issues, emerging applications, and outstanding perspectives in phase field modeling
Identifying methodological commonalities among different phase field communities
Discussing analytical challenges of phase field modeling in a transparent manner
Establishing benchmarks for the verification of models and implementations
Slender structures are used as primary components in a wide range of engineering applications. Their spread is further encouraged by the advent of new materials that enable the design of highly optimized shapes. In such structures, the mechanical response is driven by geometrical nonlinearities, while the material may behave elastically or inelastically. Complex and highly nonlinear responses frequently characterize their mechanical behaviour, and imperfections can radically influence their stability. Consequently, developing numerical approaches that offer robustness, efficiency and accuracy in analyzing slender structures is a research topic of high interest in computational mechanics, involving modelling, discretization methods and nonlinear solvers. Based on these premises, this mini-symposium aims to bring together scientists worldwide working on advanced methods for the geometrically nonlinear analysis of structures used in civil, mechanical, marine, aerospace, and biomedical engineering applications.
Therefore, contributions may involve the following aspects:
Enhanced structural models for beam, shell and solid structures undergoing large deformations.
Discretization methods based on strong formulations (e.g., collocation, the differential quadrature method, the inverse differential quadrature method) and weak formulations (e.g., the finite element method, the boundary element method, isogeometric analysis).
Advanced computational methods to evaluate the stability behaviour of lightweight structures in statics and dynamics.
Path-following strategies in statics and dynamics.
Efficient and stable time integration schemes (implicit and explicit).
Reduced order models.
Nonlinear phenomena in coupled problems (e.g. magneto-electro-thermo-mechanical problems, fluid-structure interactions).
Multi-level and multi-scale analysis of nonlinear structures.
Numerical methods for the imperfection sensitivity analysis and reliable safety assessment.
Structural optimization and control considering the nonlinear behaviour.
Strongly nonlinear and coupled problems, such as multiphysics problems, play a vital role across many applications in physics and engineering. The nonlinearity in such problems generally arises from coupling across physical fields and across spatial and temporal scales. For these reasons, the resulting system of equations often tends to be nonlinear, non-convex, non-smooth, and highly ill-conditioned. In such cases, developing or employing efficient and robust iterative methods becomes essential. In general, the robustness and efficiency of iterative methods, whether monolithic or alternate minimization schemes, are improved by exploiting the structure/physics of the underlying problem, and such iterative methods have to be tailored to the specific problem type. This mini-symposium aims to address the active research, discuss the current state-of-the-art methods in these domains, highlight emerging trends, and address problem-specific practical considerations in developing such iterative schemes. We seek contributions related to enhancing alternate and monolithic schemes, with particular focus on (but not limited to):
Linear and nonlinear preconditioning strategies
Field-split and domain-decomposition methods
Multilevel/multiscale methods
Acceleration techniques
Novel ways of enforcing coupling
Efficient implementation, e.g., matrix-free methodologies and architecture-based implementations.
We are particularly interested in problem-specific design of iterative methods/approaches with applications from the various fields of computational mechanics, including contact mechanics, fracture mechanics, fluid-structure interactions, coupled flow in porous media, and interface problems.
0900 Verification and Validation, Uncertainty Quantification and Error Estimation
In computational mathematics and physics codes, verification and validation of the implementation and suitability of the governing equations are necessary to develop confidence in the credibility of the simulations. Verification assesses the accuracy of the numerical solutions the code produces, relative to the assumptions and expectations associated with the numerical methods.
Verification can be divided into code verification and solution verification. Code verification focuses on the correctness of the numerical-method implementation in the code (numerical-error evaluation), whereas solution verification focuses on numerical-error estimation for simulations that do not have an exact solution available. Spatial and temporal discretization are often the primary focus of code verification and are typically checked using manufactured and/or exact solutions and grid/time-refinement studies. On the other hand, grid/time-refinement studies are not the only techniques proposed in the literature to address error estimation. However, most (if not all) of the proposed techniques require data in the so-called 'asymptotic range'. Such a requirement makes solution verification troublesome in practical calculations.
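As a concrete example of the refinement-study idea described above: if e_h and e_{h/r} denote discretization errors measured against a manufactured or exact solution on grids with spacings h and h/r (refinement ratio r > 1), the observed order of accuracy is

```latex
p_{\mathrm{obs}} \;=\; \frac{\ln\!\left( e_h / e_{h/r} \right)}{\ln r},
```

which is compared against the formal order of the scheme in code verification. In solution verification, where no exact solution is available, analogous Richardson-type constructions on a sequence of grids are used to estimate the error, which is why data in the asymptotic range are required.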
Topics of interest include manufactured solutions, exact solutions, and other code-verification techniques, as well as error-estimation (solution-verification) techniques, for computational physics and applied mathematics codes.
Recent advances in computational science have resulted in the ability to perform large-scale simulations and to process massive amounts of data obtained from measurements, images, or high-fidelity simulations of complex physical systems. Harnessing such large and heterogeneous observational data and integrating them with physics-based and scientific machine learning models has enabled advances in the predictive capabilities of computational models.
This mini-symposium highlights novel efforts to develop predictive computational models and model-based decision-making using physics-based and scientific machine learning models. It provides a forum for advancing scientific knowledge of data-driven complex system modeling and discussing recent uncertainty quantification developments in physics-informed scientific machine learning and data interpretation algorithms. Potential topics may include but are not limited to efforts on:
Model validation and selection under uncertainty
Scientific machine learning for complex systems
Scientific machine learning to accelerate UQ analyses
UQ methods for scientific machine learning
Design, control, and decision-making under uncertainty
Computational imaging
Operator inference for model reduction and surrogate modeling
Multi-level, multi-fidelity, and dimension reduction methods
Learning the structure of the physics-based model from data
UQ methods for stochastic models with high-dimensional parameter space
Scalable, adaptive, and efficient UQ algorithms
Extensible software framework for large-scale inference and UQ
*Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.
Advances in physics-based modeling are responsible for the generation of massive datasets containing rich information about the physical systems they describe. Efforts in Uncertainty Quantification (UQ), once an emerging area but now a core discipline of computational mechanics, serve to further enrich these datasets by endowing the simulation results with probabilistic information describing the effects of parameter variations, uncertainties in model-form, and/or their connection to and validation against physical experiments.
This MS aims to:
Highlight novel efforts to (A) Harness the rich datasets afforded by potentially multi-scale, multi-physics simulations for the purposes of uncertainty quantification; and (B) Develop physics-based stochastic models, solvers, and methodologies for identification, forward propagation, and validation;
Address modeling problems at multiple length-scales, ranging from the atomistic level to the component level, for a broad class of materials (including metals, metallic alloys, composites, polymers, and ceramics).
This includes, but is not limited to, efforts that:
Merge machine learning techniques with physics-based models;
Develop physics-based stochastic models and low dimensional representations of very high dimensional systems for the purposes of uncertainty quantification;
Extract usable/actionable information from large, complex datasets generated by physics-based simulations;
Develop active learning algorithms that exploit simulation data to inform iterative/adaptive UQ efforts;
Develop stochastic solvers and sampling algorithms;
Interpolate high-dimensional data for high-fidelity surrogate model development;
Learn the intrinsic structure of physics-based simulation data to better understand model-form and its sensitivity;
Develop new methodologies for model identification;
Assess similarities/differences/sensitivities of physics-based models and validate them against experimental data.
The MS aims to span across applications of mechanics, with an emphasis placed on methodological developments that can be applied to physical systems of all types.
It is now generally understood that the uncertainties inherent in practical engineering must be taken into account in the analysis, evaluation, and design of structural systems. The aim of this mini-symposium is to address advances in the theory, methods, and computationally efficient tools used in the practice of uncertainty quantification, reliability analysis, and design optimization of structural systems.
Topics of interest for this session include, but are not limited to:
Uncertainty modelling, quantification and propagation
Reliability analysis
Sensitivity analysis under various uncertainties
Reliability-based design optimization
Robust design and optimization
Simulation-based analysis and design under various uncertainties
Probabilistic modeling is a core aspect of uncertainty quantification, standing at the juncture of physics and data science. While the semantics of these models can encode logical and physical constraints, their mathematical analysis is steeped in probability theory, statistics, and data analysis. Recent technological advances in sensing and computing underlie the exponential growth of scholarship at this intersection, yet challenges remain in making predictions and decisions that are sufficiently constrained by both physics and data. A major challenge in this regard is learning probabilistic models from limited data and extracting meaningful statistical information in a mathematically rigorous and computationally efficient manner that is generalizable yet also domain- and problem-specific.
We invite submissions that deal with theoretical, as well as practical and applied, aspects of these challenges in uncertainty quantification problems. A partial, but non-exclusive list of topics of interest, includes:
Nonlinear manifold identification methods from sparse data
Sampling methods in high stochastic dimensions
Effective methods for constrained sampling
Novel generative models
Design of experiments for probabilistic learning
Application of learned generative models to uncertainty quantification in science and engineering
Probabilistic learning on manifolds
Probabilistic models and reasoning for digital twins and AI
Physics informed probabilistic models
We invite talks promoting discussion on methods for estimating epistemic uncertainties and contributions highlighting applications impacted by epistemic uncertainty. Epistemic uncertainty, i.e., uncertainty due to lack of knowledge, can be caused by diverse sources, such as sampling uncertainty, approximations and assumptions in the formulation of a mathematical model, and lack of experimental measurement data. As a result, epistemic uncertainty can be difficult to characterize statistically because the lack of knowledge can be extreme in some cases. Bayesian inference is a powerful tool not only for updating uncertainties in model input data, but also for estimating model-form uncertainty, also referred to as model discrepancy. However, established methods can be inadequate for estimating simulation uncertainties if large extrapolation from existing experimental data is required or coupled multi-physics interactions are dominant.
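For orientation, one commonly used additive formulation of model discrepancy (generic notation, not tied to any particular contribution in this mini-symposium) relates observations $y$ at inputs $x$ to a simulator $\eta$ with parameters $\theta$ through

$$ y(x) = \eta(x,\theta) + \delta(x) + \varepsilon, \qquad p(\theta,\delta \mid y) \propto p(y \mid \theta,\delta)\, p(\theta)\, p(\delta), $$

where $\delta$ is the model-form discrepancy and $\varepsilon$ the measurement noise; the prior placed on $\delta$ encodes precisely the kind of epistemic assumptions, especially under extrapolation, that this mini-symposium seeks to scrutinize.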
The focus of this mini-symposium is improved estimation of epistemic uncertainties in multi-physics simulations, particularly when alternate plausible models are available, sometimes referred to as competing narratives. Epistemic uncertainties are observed across physics disciplines, including turbulence, material fracture, climate, inertial confinement fusion, astrophysics, and materials under extreme conditions. Relevant topics in epistemic uncertainties include:
uncertainty in the correct constitutive model form,
uncertainty in the physical processes in play and the coupling of those many processes in a complex multi-physics simulation,
uncertainty in the physical response due to seemingly conflicting experiments or poorly characterized experiments,
uncertainty associated with the experiments being performed at conditions that systematically differ from the target conditions of interest,
uncertainty inherited by the requirement to model known physical processes with reduced-fidelity models or models of heterogeneous fidelity, and
extrapolation of epistemic uncertainties that are observed at lower-level experiments of a system hierarchy, for example, experiments conducted on subsystems and sub-assemblies.
This mini-symposium (MS) aims to bring together outstanding student contributions in uncertainty quantification (UQ) and related areas. The MS will be composed of presentations by six finalists selected from among the contributed abstracts and short papers. Students interested in contributing should submit an abstract to this MS through the abstract portal in addition to submitting their contribution to another MS. Student abstract contributions will be reviewed by a panel of UQ-TTA members. Up to twelve contributors will be invited to submit a short paper (maximum 4 pages). From the twelve contributions, six finalists will be selected to give a presentation in the MS. From these six finalists, one award will be made and will be announced during the congress banquet.
The requirements and process for entry are:
All contributions must have a student as the lead author and presenter.
Abstracts should also be submitted to another UQ-related MS. This means that student finalists will present their work twice, once in the traditional MS and once in the competition.
Once the abstract has been submitted to another MS, the students who wish to submit it as well to the competition should send an email to admin@conferences-usacm.org providing their name, abstract number, and title. The abstract will then be added to the UQ-TTA Student Paper Competition MS by the congress organizers.
Students will be notified at the time of abstract acceptance/decline whether they have been invited to submit a paper. Papers will be due approximately 1 month later.
Finalists will be notified approximately 1-2 months prior to the conference.
All finalists must register for the conference and present in the MS.
Advances in computational science and engineering have allowed scientists to contemplate numerical simulations of increasingly complex multiphysics and multiscale problems. However, an essential task for reliable predictions and suitable decision-making is to assess the accuracy of the predictions and design suitable adaptive strategies to control errors.
The topic of error estimation and adaptation, globally referred to as model verification, today extends far beyond classical discretization error assessment and mesh refinement. It also encompasses adaptive modeling, whose main objective is to adaptively control and enrich surrogate models derived, for instance, from homogenization techniques, model reduction, or response surface techniques. It further involves novel topics relevant to engineering applications, such as goal-oriented procedures, the assessment of errors due to the modeling of uncertainty, the control of the simulation complexity to enable real-time simulations for optimization or online systems control, model adaptation from experimental data, or error control for scientific machine learning applications.
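As a concrete instance of the goal-oriented procedures mentioned above, recall the classical dual-weighted-residual identity for a linear model problem (standard textbook material, stated here only for orientation): with weak form $a(u,v) = l(v)$, discrete solution $u_h$, linear quantity of interest $J$, and adjoint solution $z$ defined by $a(v,z) = J(v)$ for all $v$,

$$ J(u) - J(u_h) = l(z) - a(u_h, z) =: r(u_h; z), $$

so the error in the quantity of interest equals the residual weighted by the adjoint; localized forms of this identity drive goal-oriented adaptive refinement.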
The objectives of the mini-symposium will be to discuss the latest fundamental contributions to error estimation and adaptive methods, as well as recent developments in broad aspects of computational mechanics and applied mathematics dealing with emerging applications in which model adaptivity and modeling error control are of primal importance. We anticipate contributions on the following topics:
Estimation of discretization and modeling errors for linear, nonlinear, coupled, or time-dependent problems;
Stability, convergence, and optimality analysis of adaptive methods;
Goal-oriented approaches;
Control of hierarchical, reduced-order, and multiscale modeling strategies;
Error estimation and adaptive schemes for uncertainty quantification and optimal control;
Adaptive model enrichment from experimental inputs (e.g., full-field measurements and data assimilation);
Error estimation and control for machine learning techniques, including deep learning methods such as PINNs;
Use of adaptive techniques in the industrial context and for specific applications such as biomedical engineering or real-time system monitoring.
In practice, computational mechanics simulations involve uncertainties due to imprecise measurements, sparse data, and natural variability. UQ provides a systematic framework to quantify, characterize, and propagate these uncertainties, enabling a more comprehensive understanding of plausible model predictions. Through UQ, scientists and engineers can quantitatively gauge confidence in outcomes, perform robust optimization, assess design reliability, identify critical parameters within a model, guide future experimental efforts, and more. This, in turn, leads to risk-informed decision-making and more robust designs, making UQ an essential tool for computational mechanics in real-world scenarios.
A primary drawback of UQ is the associated computational costs. Recent integrations of machine learning (ML) with UQ have introduced novel avenues for expediting assessments and completing formerly intractable analyses. We aim to explore how the integration of these disciplines is leading to more efficient, accurate, and insightful uncertainty assessments. Topics of interest include but are not limited to the following:
Uncertainty Propagation: ML can expedite simulations, enabling tractable uncertainty propagation. Examples include active learning strategies for rare-event prediction and multi-fidelity UQ methods that leverage low-fidelity ML models (a minimal surrogate-based sketch follows this list).
Model Calibration: Traditional probabilistic model calibration techniques such as Bayesian inference can often be intractable given expensive computational mechanics simulations. ML enables calibration through, for example, surrogate modeling or likelihood-free inference techniques.
Generative Modeling: Deep learning methods such as Denoising Diffusion Probabilistic Models (DDPM) and Generative Adversarial Networks (GANs) have been used to learn complex probability distributions from data to address uncertainty quantification challenges. The potential to extend these methods using physics informed learning is of particular interest.
Uncertainty Reduction: ML can accelerate approximation of sensitivity measures, allowing for the rapid identification of critical input parameters that drive uncertainty in model predictions. ML-assisted, optimal experimental design can then be utilized to reduce uncertainty through targeted experiments.
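As a minimal, self-contained sketch of the surrogate-based uncertainty propagation referred to in the list above (the toy model, surrogate choice, and input distribution are illustrative assumptions only, written in Python/NumPy):

import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x):                 # stand-in for a costly simulation
    return np.exp(-0.5 * x) * np.sin(2.0 * x)

x_train = np.linspace(-2.0, 2.0, 15)    # 15 affordable model evaluations
y_train = expensive_model(x_train)
coeffs = np.polyfit(x_train, y_train, deg=6)   # simple polynomial surrogate

x_mc = rng.normal(loc=0.0, scale=0.5, size=100_000)  # uncertain input samples
y_mc = np.polyval(coeffs, x_mc)                      # cheap surrogate evaluations

print("surrogate mean :", y_mc.mean())
print("surrogate std  :", y_mc.std())
print("95% interval   :", np.percentile(y_mc, [2.5, 97.5]))

The same pattern underlies more sophisticated variants (Gaussian-process or neural-network surrogates, active learning near failure regions, multi-fidelity sampling); only the surrogate and the sampling strategy change.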
The collection of algorithms and methods for describing complex physical systems, such as computational fluid dynamics (CFD), is growing rapidly, and when these capabilities are combined with data from physical systems, they provide quantitative descriptions with a level of detail and coverage that was not possible only a decade ago. However, with more experimental/simulated data and algorithms comes the challenge of assessing the reliability of predictions with proper characterizations of errors and uncertainties. Thus, the issues of validation and verification in the numerical simulation of complex systems remain of critical importance.
The purpose of this mini-symposium is to discuss the challenges and recent developments on uncertainty quantification and numerical error assessment and control with applications to challenging multi-scale and/or multi-physics problems. Topics will include, but are not limited to:
Assessment/control of uncertainties and numerical errors for unsteady applications
Enriched uncertainty quantification approaches using data-informed methods
Adaptive approaches for error control
Novel algorithm developments for UQ and error analysis
Combined error control using either high-fidelity or low-fidelity (or reduced-order) models.
Predictive modeling of complex dynamical systems often involves low- to mid-fidelity mechanistic models being calibrated using data acquired through field experiments or high-fidelity simulations. These calibrated models can replace high-fidelity models for outer-loop applications including design optimization and sensitivity analysis. However, the calibration data required for building a robust predictive model often only span limited sets of modeling configurations and operating conditions. Use of such data warrants careful consideration of model and data uncertainties during model calibration.
Often, a Bayesian model discovery framework is deployed that can automatically extract maximum information from sparse training data by identifying the optimal parametric structure of mechanistic models through Bayesian model comparison and producing uncertainty-aware model parameter distributions through stochastic sampling. Such a framework can help capture data sparseness, model inadequacies, and other modeling simplifications, allowing the end-user to include these post-discovery parametric uncertainties in their applications that involve running these models at conditions different from the ones captured in the field data.
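For orientation, the Bayesian model comparison underpinning such a framework can be summarized in generic notation (not specific to any contribution) as

$$ p(M_k \mid D) \propto p(D \mid M_k)\, p(M_k), \qquad p(D \mid M_k) = \int p(D \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, d\theta_k, $$

where the evidence $p(D \mid M_k)$ automatically penalizes unnecessarily complex parametric structures, and the posterior $p(\theta_k \mid D, M_k)$ obtained by stochastic sampling carries the remaining parametric uncertainty forward into prediction.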
The field of model comparison and Bayesian learning has significantly advanced in the last decade, mostly due to an immense interest in predictive low-fidelity representation of high-fidelity models in situations where large numbers of model runs are required. This symposium will focus on the broad field of Bayesian inverse modeling within the context of dynamical systems, with particular focus on recent advances in the field of Bayesian learning including model comparison, sparse learning, dimensionality reduction and compressive sensing.
High-fidelity computational mechanics applications are notoriously expensive, which limits the number of simulations of these models that can be used for many-query tasks, e.g., uncertainty quantification (UQ). Consequently, multi-level and multi-fidelity modeling have been developed to reduce the computational burden associated with evaluating high-fidelity statistics and/or the construction of surrogates. Multi-fidelity methods increase accuracy for a fixed computational cost by optimally allocating computational resources to multiple models/data sources of varying cost and accuracy. Typically, a limited number of high-fidelity simulations are used to maintain predictive accuracy, while larger numbers of lower-fidelity simulations allow greater exploration of model uncertainties; this can reduce errors in estimates of uncertainty by orders of magnitude for a fixed budget.
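To fix ideas, a minimal two-fidelity control-variate estimator is sketched below (the toy models and sample sizes are illustrative assumptions, not a method advanced by any particular contributor; in Python/NumPy):

import numpy as np

rng = np.random.default_rng(0)

def f_hi(x):            # stand-in for an expensive high-fidelity model
    return np.sin(x) + 0.1 * x**2

def f_lo(x):            # cheap, correlated low-fidelity model of f_hi
    return np.sin(x)

N, M = 20, 2000                         # few expensive runs, many cheap runs
x_N = rng.normal(size=N)                # paired samples for both models
x_M = rng.normal(size=M)                # extra samples for the cheap model only

hi, lo = f_hi(x_N), f_lo(x_N)
alpha = np.cov(hi, lo)[0, 1] / np.var(lo, ddof=1)   # control-variate weight
# (estimating alpha from the same paired samples introduces a small bias,
#  ignored in this sketch)

mf_estimate = hi.mean() + alpha * (f_lo(x_M).mean() - lo.mean())
print("high-fidelity-only estimate:", hi.mean())
print("multi-fidelity estimate    :", mf_estimate)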
This minisymposium will present advancements in multi-model algorithms for surrogate construction and UQ. Topics of interest include the development and/or deployment of advanced multi-fidelity tools, including inference and estimation, uncertainty propagation, experimental design, and data-driven learning. Examples of research questions of high interest include: (1) how to identify an effective multi-fidelity model ensemble?; (2) how can model ensembles be adaptively tuned or pruned to improve the multi-fidelity estimation?; (3) how can structure be identified and exploited to improve multi-fidelity analysis, e.g., by resorting to latent/shared variables?; (4) what are relationships between multi-fidelity modeling, multi-task learning, and transfer learning, and how can they be exploited?; (5) how can multifidelity UQ tools be embedded in design, optimization, and control problems?; and (6) how can multi-fidelity tools be leveraged in challenging computational scenarios like unsteady, nonlinear, and/or chaotic regimes? Moreover, talks can also focus on pragmatic strategies for overcoming problem specific challenges that arise during deployment of multi-fidelity methods to realistic mechanics models, e.g. strategies that account for the computational cost of constructing pilot samples datasets which guide the optimal allocation of resources.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
Uncertainty quantification is important for the design of vibration-sensitive structures. Especially for lightweight structures, which are increasingly demanded to save construction material, dynamic structural analyses are required to assess structural reliability with respect to both structural safety and serviceability. Another field of application in structural dynamics is structural health monitoring, where defined dynamic loads are applied to assess the structural condition, e.g., to detect and localize structural damage. Besides high-quality numerical simulation models, which are needed to capture the physical behavior, advanced uncertainty models are required to quantify the structural loads, the material and geometrical parameters, and the boundary conditions according to the available information. This often requires considering not only aleatory but also epistemic uncertainties within the dynamic structural analysis, which can be handled, e.g., by finite element simulations in combination with stochastic simulations and interval or fuzzy analyses. In the case of time-consuming numerical simulation models, reduced-order models, surrogate models, or multi-fidelity models can help to reduce the computation time.
Possible topics for contributions of the Minisymposium are:
quantification of uncertain structural loads and parameters in structural dynamics
reliability and risk assessment of vibration sensitive structures
simulation techniques in structural dynamics considering uncertainties
surrogate modeling, reduced order modeling and multi-fidelity models for dynamical analyses
structural health monitoring and inverse analysis based on dynamical models
design and optimization of structures under uncertain dynamical loading
1000 Structural Mechanics, Dynamics and Engineering
The aim of this mini-symposium is to summarize progress in theoretical, computational, and experimental research in the field of structural analysis of steel and steel-concrete composite structures. Special emphasis is given to new concepts and procedures concerning the computational modelling, structural analysis, and design of steel and steel-concrete composite structures. Topics of interest include static and dynamic analysis, fatigue analysis, seismic analysis, vibration control, stability design, structural connections, cold-formed members, bridges and footbridges, fire engineering, trusses, towers and masts, linear and nonlinear structural dynamics, and soil-structure interaction. Papers on all research areas related to theoretical, numerical, and experimental aspects of the computational modelling, analysis, and design of steel and steel-concrete composite structures are very welcome.
Despite the advent of numerous well-established algebraic techniques, the infusion of the variational approach into computational mechanics is still under way. Among such approaches, the partitioning method and interface mechanics have played a pivotal role. They have enabled large-scale distributed parallelism, the coupling of different discretizations and physics, component-by-component analysis, and more.
Topics of this minisymposium will include, but are not limited to, advances in the partitioning method and interface mechanics across a variety of fields: parallel computing, transient analysis, reduced-order modelling, contact-impact, inverse problems, damage detection, optimization, and multi-physics analysis. This minisymposium also welcomes contributions that combine these approaches with data-driven methods. The mini-symposium will bring together researchers working on both fundamental and applied aspects of computational mechanics to provide a forum for discussion, interaction, and assessment of these techniques.
The purpose of this mini-symposium is to bring together researchers from civil, mechanical, aerospace and other engineering fields and investigate a wide range of problems, related to the mechanical response of composite materials and structures using numerical methods. Papers related to modelling, design and optimization of various types of composite materials and structures are solicited.
Modern fibre reinforced composite laminates, sandwich panels, auxetics, novel metamaterials, and nanocomposite materials will be part of the mini-symposium. Traditional composite materials including masonry or concrete are also included in the topics of the symposium.
Papers on different failure types, on mechanical behaviour of structures and on material properties are welcome.
Different constitutive descriptions, including damage mechanics, plasticity, contact mechanics, etc. as well as multi-scale and homogenization methods, are encouraged. Load types include but are not limited to statics and dynamics, thermal effects and acoustic applications. Modern data-driven and physics-informed methods including machine learning solutions, which are able to exploit experimental data, are also included in the topics of the symposium.
Vibration of structures is of major importance for industrial applications. Most of the time, engineers need to control these vibrations for several reasons: comfort of users, protection of sensitive devices, fatigue of structures or energy harvesting. These vibrating structures are mostly coupled to other physics or media, leading to coupled dynamical systems. Classically, structures can be coupled to (non-exhaustive list):
Fluids, such as for vibroacoustic problems.
Other solid bodies, such as solids of viscoelastic or porous materials.
Electric devices, for example through piezoelectric patches.
These couplings can involve linear or non-linear phenomena.
Engineers thus need to have both (1) predictive and (2) efficient numerical tools in order to design such systems. This is especially the case in the context of optimization, which could require numerous computations to identify optimal designs.
The aim of this mini symposium is to gather researchers from both industry and academia in order to review recent advanced developments around two key points:
Modeling of the involved linear or non-linear dynamical multi-physics phenomena, with a focus on efficient numerical strategies, for instance reduced-order models of the coupled system.
Optimization in this physical framework, such as parametric, shape, and/or topological optimization, applied in a deterministic and/or stochastic context.
The scope of the mini symposium is broad, as it includes different types of multi-physics problems; analytic and numerical methodologies for the whole optimization; and application of surrogate models. Both theoretical developments and practical applications involving dynamical systems of engineering interest are particularly welcomed in this session.
The scope of the "Symposium on Smart Structures – Modelling and Simulation" is to provide a comprehensive overview of modeling methods and computational simulation techniques for all types of so-called smart materials and structures. Special emphasis is on the scientific exchange among specialists working in the fields of structural mechanics, materials science, actuator and sensor technology, and active control of smart structures. The Symposium is focused on the methods, not on a specific field of application. Therefore, scientists from all areas are welcome. Thus, it is the purpose of the Symposium to enhance the transfer of methods and experience among different fields.
The rapid rise and continuous growth of data-driven processes, coupled with advancements in deep learning and machine learning, have led to a significant convergence across various fields in applied computational mechanics. In this context, this mini-symposium aims to explore the intersection of these cutting-edge technologies with a focus on their applications in structures, structural dynamics, and aeroelasticity.
Topics to be covered in this mini-symposium include, but are not limited to, data-driven methods, the incorporation of machine learning techniques, uncertainty quantification, and addressing inverse problems in structures, structural dynamics, and aeroelasticity. With a special emphasis on solving large-scale industry-relevant challenges, we will delve into the practical implications of these advancements in fields such as multiphysics and design optimization. The mini-symposium serves as a collaborative platform, bringing together researchers working on both fundamental and applied aspects of advanced computational mechanics based on data-driven techniques.
Non-destructive testing and evaluation (NDT&E) and structural health monitoring (SHM) are very important for quality assurance during manufacturing and the in-service life of various structures. The aim of this mini-symposium is to report and discuss recent progress in the use of ultrasonic technologies: i) computational modeling methods for various waves (such as guided waves and special bulk or surface waves); ii) new methods/approaches/software with advanced sensor technologies; iii) signal processing algorithms and damage indicators (high-order, time/frequency domain, adaptive, etc.); and iv) AI methods for effective ultrasonic NDT&E and SHM.
This minisymposium focuses on both theoretical and practical aspects concerning the transient solution of structural dynamics problems in science and engineering. In particular, novel numerical methods and solution strategies as well as discretization schemes in space and time for wave propagation, structural vibration, structural health monitoring, coupled problems (e.g., fluid-structure interaction), and impact problems are of interest. This includes, but is not limited to, the development or application of
isogeometric and high-order finite element methods (e.g., IGA, SEM, p-FEM, etc.),
fictitious domain methods,
meshfree methods,
mass lumping and mass scaling techniques, or
advanced time integration schemes (e.g., novel implicit and explicit time integration schemes, implicit-explicit or asynchronous time integration schemes, sub-cycling, parallel implementation, etc.; a minimal illustration follows below).
Furthermore, contributions dealing with large-scale, industry-relevant applications are expressly welcome.
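As a minimal illustration of the explicit time integration schemes listed above (a toy single-degree-of-freedom oscillator with placeholder parameters, in Python/NumPy; not tied to any particular contribution):

import numpy as np

m, c, k = 1.0, 0.1, 100.0           # mass, damping, stiffness
dt = 0.01                            # stable: dt < 2/omega_n = 2/sqrt(k/m) = 0.2
n_steps = 1000

u = np.zeros(n_steps + 1)            # displacement history
v = np.zeros(n_steps + 1)            # velocity history
a = np.zeros(n_steps + 1)            # acceleration history

u[0] = 0.01                          # initial displacement, zero initial velocity
a[0] = (-c * v[0] - k * u[0]) / m    # acceleration from the equation of motion

for n in range(n_steps):
    v_half = v[n] + 0.5 * dt * a[n]          # half-step velocity
    u[n + 1] = u[n] + dt * v_half            # central-difference displacement update
    # damping treated explicitly with the half-step velocity to avoid an implicit solve
    a[n + 1] = (-c * v_half - k * u[n + 1]) / m
    v[n + 1] = v_half + 0.5 * dt * a[n + 1]  # complete the velocity update

print("displacement after", n_steps * dt, "s:", u[-1])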
Aging transport infrastructure presents a growing challenge for modern societies. Many of the world's bridges, roads, dams, and tunnels were constructed decades ago and are now showing signs of wear and tear. To ensure the safety and prolong the service life of these critical assets, there is a growing need for Structural Health Monitoring (SHM) systems. SHM involves the continuous assessment and analysis of a structure's condition and performance using a variety of sensors and data analytics. By monitoring factors like vibrations and temperature, SHM enables researchers and stakeholders to detect subtle signs of deterioration or damage at an early stage, allowing for more informed maintenance and repair decisions.
Conventional (also known as "direct") SHM methods use data from sensors placed directly on the structure. Such SHM systems require complex installation, maintenance, and a continuous power supply. To overcome these issues, an alternative approach has evolved that relies on data from sensors mounted on a vehicle passing over the monitored structure. This approach is known as indirect SHM or "drive-by" inspection. In recent years, numerous studies have highlighted the potential of indirect SHM to gain widespread adoption and to overcome challenges such as its sensitivity to road roughness, vehicle parameters, and environmental effects. Rapid development in the field is also fuelled by fast progress in sensor and IoT technologies, smart vehicle technologies, and data acquisition and transfer systems.
In this mini-symposium we invite researchers to discuss recent progress in the field of indirect SHM. Topics include, but are not limited to, signal processing, machine learning methods, vehicle-bridge interaction models, system identification, and data-driven and physics-based methods.
The objective of this mini-symposium is to share talks related to the evolution, dynamics, and evolved dynamics of systems. For this purpose, we seek to analyze, debate, or defend new models, equations, and solutions that lead to explicit, exact closed-form analytical solutions, or to analytical approximations or expansions accompanied by numerical verification, which can clarify, optimize, reinforce, or facilitate the computational implementation of problems associated with the nonlinear dynamics or evolution of systems of interest to the exact, physical, natural, social, and engineering sciences, with possible applications to research problems associated with the health, progress, or general well-being of populations. Given the wide range of applications and the great diversity of problems covered by this objective, the following areas of interest are mentioned in a non-exclusive manner: I) Conical flow, II) Turbulent flow, III) Nonlinear analysis of the interaction of systems, IV) Evolution of landscapes in geomorphology, V) Isolation of dynamic patterns in biology, VI) Evolution of the dynamics of subsystems of a language, VII) Cratering processes associated with high energies, VIII) Models of aging populations, IX) Information flow models, X) Membrane dynamics, XI) Models of the interaction of systems with environments and their evolution, XII) Degradation, etc.
Driven by the rise of innovative materials such as non-metallic reinforced components, there is a general trend to rethink the design of structural buildings and the way they are constructed. Inspiration from biology is leading to new ways to build lightweight, resource-efficient components with optimized strength. Compared to their conventional steel counterparts, these reinforcements offer superior flexibility in geometric shape while also offering higher corrosion resistance. Especially in the civil engineering context, carbon reinforced concrete (CRC) emerges as a versatile and sustainable material option for a wide range of applications.
Construction with CRC necessitates the utilization of advanced simulation techniques for structural analysis. On the one hand, micromechanical mechanisms have to be investigated, taking into account the material behavior of the components and their interaction. On the other hand, simulation methods for the analysis of curved, lightweight construction elements need to be developed. At all levels of analysis, a high degree of accuracy and efficiency is required to exploit the material properties to their full potential.
Topics of interest include (but are not limited to):
Material modeling of the inelastic behavior of carbon reinforced concrete
Methods for the analysis of debonding, fracture and failure of CRC
Simulation methods for thin, curved structures
Homogenization, multi-scale modeling and mixed-dimensional substructure modeling
Model order reduction techniques in quasi-static, inelastic analysis
This mini-symposium aims to convene researchers specializing in the field of modelling structural concrete and to provide a platform for the exchange of interdisciplinary knowledge. The primary focus is on advanced simulation techniques tailored for carbon reinforced concrete. Nevertheless, the mini-symposium welcomes contributions involving techniques that either exclude reinforcement or incorporate alternative reinforcement materials.
1100 Manufacturing and Materials Processing
Various additive manufacturing (AM) techniques including 4D printing have been developed to manufacture complex-shaped components with well-controlled precision. Sophisticated AM techniques often require systematic modeling and simulation efforts during the design stage and for the purpose of part qualification/certification. The objective of this minisymposium is to provide a platform to discuss recently developed modeling and simulation techniques for AM, including experimental calibration and validation efforts for the process. The topics include (but are not limited to):
Simulation of the manufacturing process to predict heat transfer, residual stress/distortion, surface topology, composition, and microstructure including defects across multiple length and time scales
Data-driven approaches for simulation acceleration
Combined simulation and in-situ monitoring for rapid build qualification
Effects of microstructure and defects on mechanical properties
Feedback control for minimizing defects and residual stress in as-built structures
AM-oriented topology optimization
Analysis of lattice and cellular structures
Modeling and simulation of functionally graded materials, tissue engineering scaffolds, bioinspired composites, bi-material joints, etc
Computational modeling and simulation for any AM process (e.g., laser powder bed fusion, electron beam melting, fused deposition modeling, stereolithography, binder jetting) and material (e.g., metals, plastics, ceramics, and their composites, as well as biological materials) are welcome.
Given the increasing potential for unforeseen or disruptive technologies, evolving risks to supply chains, and other environmental, health, and global uncertainties, we are faced with the need for a more competitive, adaptive, and resilient manufacturing infrastructure. At the same time, there is a burgeoning desire for "faster-better-cheaper" complex parts as well as on-demand bespoke (i.e., small-lot) products. These drivers have motivated renewed efforts to understand, predict, and control manufacturing processes by leveraging a hierarchy of physics-based modeling and simulation approaches integrated with digital technologies and data-analytic tools. Of particular interest are the advent of new manufacturing techniques based on additive manufacturing process routes and the potential of functionally graded composite, nano-structured, and novel materials by design. Additionally, the potential for leveraging data via the "network of things" and machine-learned models for integrated AI controls and process optimization holds out promise for a new era of industrialization that is robust, responsive, and "smart". This minisymposium seeks to boost modeling and simulation efforts for improved physical insight, to guide experimentation, and to reduce build-test cycles in research labs and in industry. It aims to provide a forum to present recent advances in model-based and digital manufacturing methods and approaches. Topics include but are not limited to:
Multiphysics modeling and simulation of manufacturing processes
Fluid, thermal, and solid computational models
Models to gain physical insight into microstructure and defect formation processes and mechanisms
Data-centric and machine-learned/Artificial Intelligence models for automation
Physics-informed neural networks
Reduced-order models for process control
Digital twins
Image-based simulation for digital inspection and acceptance
Experimental validation of simulations
Experimental investigation of process monitoring and control
Fundamental investigations on the structure–process–property relations
Hierarchical coupling of process models for digital design, control, and qualification of manufacturing process-structure-property relationships of as-built parts
Various advanced manufacturing techniques for metallic materials have been developed to produce complex near net-shaped parts. Representative examples include solid-state joining (e.g., friction stir and ultrasonic additive manufacturing), powder bed fusion additive manufacturing (e.g., laser- and electron beam-based), wire-based additive manufacturing (e.g., laser and wire-arc DED), and hybrid methods that combine additive and subtractive modalities in an integrated manner. Efficient process optimization for these advanced manufacturing methods often requires the use of multi-scale and multi-physics computational models. This symposium seeks to provide a forum for discussion of the latest methods and developments in the modeling and simulation community as applied to process modeling and optimization for advanced manufacturing of metals. Contributions on process modeling of any advanced manufacturing method for metallic materials are welcome. Topics of interest include, but are not limited to:
Thermo-mechanical modeling of advanced manufacturing processes to predict residual stress and distortion (see the illustrative sketch after this list).
Multi-scale and multi-physics modeling approaches.
Establishing microstructure-property linkages through computational homogenization.
Simulation guided process and/or topology optimization, especially through machine learning approaches.
Model verification and validation and uncertainty quantification.
Adaptive spatio-temporal discretization strategies.
Data-driven and reduced-order modeling.
Functionally and/or compositionally graded materials processing.
Microstructure and defect predictions and their influence on material properties.
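As a minimal illustration of the thermal side of such process models, the sketch below evaluates the classical Rosenthal quasi-steady solution for a moving point heat source on a semi-infinite body (the material values are placeholders, roughly representative of a steel; in Python/NumPy, not tied to any particular contribution):

import numpy as np

def rosenthal_temperature(x, y, z, Q=200.0, v=0.8, k=20.0, alpha=5e-6, T0=300.0):
    # Quasi-steady temperature [K] in the frame moving with the source.
    # Q: absorbed power [W], v: travel speed [m/s],
    # k: thermal conductivity [W/(m K)], alpha: thermal diffusivity [m^2/s].
    xi = x                                      # x is the moving-frame coordinate
    R = np.sqrt(xi**2 + y**2 + z**2) + 1e-12    # avoid division by zero at the source
    return T0 + Q / (2.0 * np.pi * k * R) * np.exp(-v * (R + xi) / (2.0 * alpha))

# Example: temperature 1 mm behind the source on the surface
print(rosenthal_temperature(-1e-3, 0.0, 0.0))

Closed-form fields of this kind are often used to sanity-check or initialize the full thermo-mechanical finite element analyses that predict residual stress and distortion.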
Computational mechanics has been playing a vital role in developing effective modeling, optimization, and monitoring tools for additive manufacturing. This mini-symposium (MS) aims to provide a platform to discuss the recent advances in computational mechanics for additive manufacturing. Specific topics of the MS include (but are not limited to)
Multi-scale and multi-physics AM process modeling
Modeling and simulation for microstructure evolution, phase transformation, and defect formation
Data-driven modeling techniques for model integration and material design
Topology optimization
Uncertainty quantification in AM process and materials properties prediction
As metal additive manufacturing (MAM) increasingly gains traction for making mission-critical parts with a wide selection of alloy compositions, the time is ripe to take advantage of the design freedom it enables. In particular, the ability to deposit graded material compositions opens the door to novel computational design approaches that combine design space exploration (DSE), geometric and physical modeling, and materials informatics (MI) to enable optimal co-design of MAM-able part geometry and material properties. Most traditional shape/topology optimization (SO/TO) approaches treat material selection as a separate activity, with available properties often pre-determined based on materials already qualified for traditional processes (e.g., metallurgical casting and forging). While MAM opens up an enormous design space with complex shapes and new alloys (with potentially graded composition and properties) that were not possible before, it also presents challenges due to the intertwined nature of shape, manufacturing process, and as-built properties. Early multi-material TO algorithms that allowed for voxel-level material composition design either overlooked or oversimplified metallurgical considerations such as alloy compatibility and heat-treatment effects. Material design and discovery (e.g., based on CALPHAD and experimental characterization), on the other hand, operates independently of geometric part design, of how the part may benefit from new properties in different spatial locations to achieve better function, and of how it may be constrained by the MAM process (e.g., shape-determined heat transfer, which, in turn, affects as-built material properties).
This minisymposium aims to present opportunities and challenges in using computational design tools that treat the spatial distribution of material properties as an explicit design variable, alongside shape and topology, navigating inherent tradeoffs among geometric and metallurgical feasibility, interfacial compatibility, and additive or hybrid manufacturability. Multi-disciplinary approaches using model-based geometric and physical reasoning, integrated computational materials engineering (ICME), and data-driven MI are encouraged. Recent advances in machine learning and AI present unique opportunities for DSE, MI, and surrogate modeling for rapid geometric and physical reasoning within DSE, as well as for their efficient integration, and hence fall within the scope of the minisymposium. Also in scope are methods that present partial solutions to this challenge.
Additive manufacturing (AM), also known as 3D printing, is a revolutionary manufacturing process that fabricates continuum objects layer by layer using digital models or computer-aided design (CAD) data. Several different AM technologies are available, each with its own advantages and limitations, making them suitable for different applications and industries. However, wider industrial adoption is hindered by the lack of a comprehensive understanding of the relationship between the complex manufacturing process, material microstructure formation, mechanical properties, and performance. Trial-and-error experimentation is time-consuming and costly for identifying process parameters that yield the expected geometry and properties. Therefore, this mini-symposium (MS) aims to provide a platform for mechanicians, computer scientists, and industrial researchers to discuss and share numerical simulation methods and machine learning-based computational models for the AM process, microstructure, property, and performance, in order to advance fundamental understanding, to further guide process parameter optimization, and to foster scientific exchange among scientists, practitioners, and engineers in affiliated disciplines.
The topics of interest are, but not limited to:
Multi-physics multi-scale numerical simulation methods for AM
Reduced order methods for AM
Data-driven based computation for AM
Digital twin for AM
High fidelity numerical modelling for AM process
Grain structure evolution modelling for AM
Mechanical properties predictions for AM
Thermoelectric magnetohydrodynamics and electrohydrodynamics in field-assisted AM
Melt pool dynamics in AM
Multi-phase flow and interface evolution in AM
Topological optimization for AM
Numerical modelling for fracture and fatigue in AM
Artificial-Intelligence for Science in AM modelling and simulation
1200 Atomistic, Nano and Micro Mechanics of Materials
The advance of nanotechnology has enabled the fabrication of high-performance functional materials and the continuing miniaturization of mechanical devices and systems. To facilitate the manufacturing and application of nanoscale materials, it is vital to understand their mechanical properties. There have been extensive experimental, theoretical, and computational efforts at the atomistic scale to understand the mechanical behaviours of nanomaterials. In particular, for novel structures with extremely small dimensions, such as the ultra-thin diamond nanothread [1], atomistic simulations have provided useful guidelines for experimental study.
Besides the mechanical properties, the thermal transport of nanomaterials is another fundamental characteristic that determines their usage [2]. Depending on the application, materials are required to have either a high thermal conductivity or a strongly suppressed thermal conductivity. For instance, for energy saving in residential and commercial buildings and for thermoelectric devices, there has been a continuing search for high-performance materials with a low thermal conductivity. In comparison, a high thermal conductivity is required in electronic packaging to enable efficient heat removal and transfer.
The diversity of low-dimensional nanomaterials offers great potential to construct novel nanostructures with the required mechanical and thermal performance. This mini-symposium intends to bring together recent progress on atomistic simulations of the mechanical and thermal transport properties of nanomaterials, which serve as effective tools to guide experiments or predict novel nanomaterials.
REFERENCES
[1] Zhan H, Zhang G, Bell JM, Tan VBC, Gu Y. High density mechanical energy storage with carbon nanothread bundle. Nat. Commun. 2020; 11(1): 1905.
[2] Zhan H, Nie Y, Chen Y, Bell JM, Gu Y. Thermal Transport in 3D Nanostructures. Adv. Funct. Mater. 2019; 30(8): 1903841.
Voids are ubiquitous in all engineered and naturally occurring materials, whether intentionally introduced or intrinsic to a material. They can be found at all length scales: atomic, nanometer, micrometer, and macro-scale. Just as the control of materials is reaching ever-smaller dimensions, so too is our control of intentional voids. We can routinely engineer voids into materials down to the single-nanometer size range and, through chemistry, create sub-nanometer voids with covalent organic frameworks. As a material design criterion, these empty spaces can negatively affect the mechanical response but can also lead to a variety of interesting properties: electrical, thermal, chemical, bioactivity, etc. The design of materials with voids simultaneously on multiple length scales is known as hierarchical porosity and is key to achieving multifunctional materials and a globally optimized system or part. Hierarchical design is becoming widely used to reduce part count and overall system mass. In addition to the initial configuration and void content, aging and other time-related phenomena can introduce additional voids or change the characteristics of the original voids in the material, leading to a time-dependent mechanical response. This fourth dimension of time-dependence and hierarchical porosity are just two examples of the challenges that face the mechanical modelling of materials.
This is the second Modeling Mechanics of Materials with Voids minisymposium as part of WCCM. The first was held in 2022. This minisymposium will explore the mechanical response of materials that are conceptually “built” from the ideal properties at the atomic scale to the “real” properties at macro-scale, by introducing voids/defects at the appropriate length scales along the way. Whether crystalline (metal and ceramic), network (polymer and glass) or composite materials, the mechanical response can be viewed as the contributions that voids and defects at the various length scales have on the ideal material properties. We invite submissions on modeling the effects of voids in materials at any length scale on material behavior including mechanical, thermal, chemical, biological, and electrical properties and function. While this minisymposium is listed under the focus area of Atomistic, Nano, and Micro Mechanics of Materials, papers are invited from other focus areas including, but not limited to, Biomechanics and Mechanobiology (500), Materials by Design (600), Fracture, Damage, and Failure Mechanics (200), Fluid-structure Interaction, Contact and Interfaces (1600), and Fluid Dynamics and Transport Phenomena (700).
1300 Modeling and Analysis of Real World and Industry Applications
In order to analyze real phenomena such as social, environmental, and disaster prevention problems, it is necessary to develop appropriate mathematical models of phenomena and develop simulation methods in parallel. The purpose of this session is to exchange opinions on modeling methods and simulation methods. We expect the participation of many researchers who are interested in related fields.
Isogeometric analysis (IGA) was originally introduced to achieve seamless integration of computer-aided design (CAD), computer-aided engineering (CAE), and computer-aided manufacturing (CAM). Many IGA technologies have undergone significant advancements since their introduction, including the development of splines that are simultaneously suitable for CAD, CAE, and CAM and the use of spline-based immersed approaches. IGA and its extensive applications continue to evolve as these methods transition from academia into industry. This minisymposium will feature a broad representation of industrial results and IGA research projects, including presentations from academics consulting on industry projects, software vendors, end users, and academics working on large-scale parallel implementations of IGA.
The main focus of this Mini-Symposium is the discussion of modeling and simulation of the dynamics, stability, and control of aerospace structures (such as airplanes, drones, helicopters, rockets, satellites, etc.), and how these problems can be understood and solved through numerical, computational, theoretical, and experimental approaches. Contributions pertaining to any class of mathematical problems and methods associated with the dynamics, stability, and control of aerospace structures are welcome. Experimental investigations that validate mathematical and numerical models are also welcome, as is work on the reliability of this kind of structure.
Society 5.0 aims to create a human-centered society that balances economic development and social problem-solving through CPS systems that highly integrate cyberspace and physical space. In this Mini-Symposium, we will consider the construction of Digital Twins that accurately model the real world, which is necessary to realize such CPS. To do this, we will incorporate the uncertainty and complexity inherent in humans and society into the Digital Twins, and further consider the integration of human knowledge (scientific knowledge, experiential knowledge, tacit knowledge). We call the Digital Twins realized in this way "Extended Digital Twins." We will gather presentations on the computational information science infrastructure, new mathematical models, and AI approaches that are necessary to achieve them.
This minisymposium is focused on computational methods for solids and structures subjected to extreme loads, such as shock and high-speed impact. A broad area of contributions is sought to include numerical modeling of both the prediction of severe loads and subsequent dynamic response, which may include the coupling of multiple areas of computational mechanics. Typical contributions to this forum might come from defense, construction, petroleum, mining, space, or counterterrorist and law enforcement applications. The use of numerical simulation for weapon-structural interactions has seen significant growth in recent years, primarily due to increasing computational accuracy, improvements in computing hardware, and the greater expense of testing. The development of new technologies relies on modeling impact, penetration, and explosive effects for weapon effectiveness and structural-damage evaluation to vehicles, body armor, and protective structures. Recent Lagrangian higher-order finite element, isogeometric, and meshfree methodologies enable analysts to look at old problems more easily and in new light, while Eulerian and ALE approaches remain essential to the simulation of air blast and explosive detonation. New multiscale and machine-learning approaches for constitutive modeling can provide greater fidelity for complex material responses. In addition, assessment of force protection and terrorist threats to government facilities and civilian infrastructure now frequently incorporates computational mechanics for blast-structural modeling. This is particularly true for cases, such as large buildings, dams, or bridges, where full scale testing of a threat is not feasible, but it is also important for post-event structural-integrity assessments in standard construction. Oil, mining, and construction operations such as drilling, excavation, demolition, explosive anchor driving, and disaster-protection and recovery/damage assessment can also utilize such technologies, and the modeling of impact has become important in aircraft and spacecraft design. The nature of all these applications typically involves some of the most challenging aspects in structural mechanics: nonlinear material behavior under large strains and/or high strain rates; failure and dynamic fracture; initiation, burning, and detonation of energetic materials; phase change and transition; and high-velocity contact and friction.
The purpose of this mini-symposium is to provide a forum for technical presentation and exchange, and to establish communication and collaboration between academic, government, and industrial software researchers in the field of computational mechanics for extreme-loading applications. Papers dealing with theoretical developments, multi-spectral physics coupling, new higher-order and isogeometric element technologies, meshfree modeling, algorithms and numerical methods, implementation and parallel computational issues, exploitation of GPU programming, new constitutive modeling, experimental validation, and practical applications are all welcome.
With the evolution of IoT and AI, digital twins that replicate real-world objects and environments in virtual spaces using data collected from the physical world have gained attention. Particularly for rare events like large earthquakes, leveraging numerical simulations becomes crucial due to scarce actual data, allowing the construction of digital twins to enhance disaster preparedness and mitigation.
To conduct disaster simulations, information on the structures within the targeted city is necessary. In recent years, many cities have been actively developing data for 3D urban models, and simplified urban simulations utilizing this data have been performed. On the other hand, efforts have been made to manage detailed information for individual structures using BIM/CIM which can be utilized to create detailed models.
The information required for disaster preparedness and mitigation efforts varies depending on the stakeholders involved. From the administrative perspective, there is a high demand for regional-level damage prediction information. From the residents' standpoint, detailed predicted information on individual buildings is considered crucial. Given the differing levels of detail in obtainable data and the diverse information needs of each stakeholder, the flexible selection of heterogeneous simulations according to the situation is desired. Furthermore, to facilitate information sharing and communication among different stakeholders, an integrated simulation spanning from living spaces to urban scales is needed.
In this mini-symposium we discuss the recent advancements in numerical approaches for integrated disaster simulation, spanning from individual living spaces to urban scales. We welcome topics on numerical modeling using BIM/CIM or 3D urban models, numerical simulations ranging from detailed simulations of buildings, including furniture and non-structural components, to city-scale simulations. Additionally, we are interested in technologies that holistically manage heterogeneous simulations through integration with GIS. We also invite discussions on technologies to reduce computational cost of detailed simulations, such as surrogate models, and methods to assess disaster risk and estimate damages based on numerical simulations.
On the path towards predictive modeling via digital twins, the need for rapid, accurate, and reliable information comes to the forefront. Despite the increase in the available computational power, standard discretization techniques still struggle to provide viable solutions fulfilling industry time constraints. This is particularly critical during conception, design, and operation of complex systems, where the need for accurate many-queries applications and real-time response arises. This minisymposium focuses on recent trends, innovative methodologies, and algorithms for the efficient construction and execution of digital twins describing complex, potentially large-scale, systems and processes, spanning from data assimilation to uncertainty quantification, optimization, monitoring, and control. Methodologies of interest include different types of surrogate and reduced-order models (based on projection approaches and machine learning techniques), nonlinear dimensionality reduction, and multi-fidelity models, with a particular emphasis on hybrid approaches that incorporate both physics and data assets, towards interpretable artificial intelligence. Relevant fields of applications include but are not limited to, fluid-structure interaction, nonlinear mechanics, turbulent flows, compressibility, multi-phase interfaces, and heat exchange. Contributions featuring digital twins for (large-scale) industrial problems are particularly welcome.
The wind energy industry is continuously advancing toward very large wind turbines and offshore installations in deeper waters far from shore. As a result, engineering processes -- including design, testing, manufacturing, and O&M of these systems -- increasingly depend on accurate and efficient simulation methods. This minisymposium aims to present and share the latest knowledge and advancements in the application of computational methods to wind energy. We welcome presentations covering topics such as multiscale and multiphysics simulations of the wind and the atmospheric boundary layer, both floating and fixed-bottom offshore wind energy, onshore and non-conventional wind energy applications, dynamics of large wind farm systems, environmental impact, as well as fluid and solid mechanics and fluid-structure interactions of turbines and sub-structures.
1400 Inverse Problems, Optimization and Design
Motivated by key advances in manufacturing techniques, the tailoring of materials with desired macroscopic properties has been the focus of active research in engineering and materials science over the past decade. For materials architected at length scales that can be controlled by the manufacturing process, the goal is to determine the optimal spatial layout of one or more constituent materials to achieve a desired macroscopic constitutive response. Topology and shape optimization methods provide a systematic means to achieve this goal. The objective of this symposium is to bring together researchers working on state-of-the-art topology and shape optimization techniques with direct application in materials design to exchange ideas, present novel developments, and discuss recent advances. Topics of interest concern shape and topology optimization techniques, and they include, but are not limited to:
New topology and shape optimization algorithms
Topology and shape optimization for additive manufacturing
Machine learning-assisted, data-driven, and surrogate-based topology and shape optimization
Multiscale, multifunctional, multi-objective design of materials and structures
Multiphysics and multidisciplinary optimization
Stress-constrained topology optimization
Reduced-order multiscale modeling for design
Simultaneous material and structure optimization
Optimization under uncertainty
Design of architected materials
Design of nonlinear materials
Bioinspired design of composites
Design of metamaterials
Smart material design
Software
The design process in engineering applications is currently experiencing a change in paradigm, away from experience-based design and toward numerical design. In many such engineering applications, flows of complex fluids are encountered, posing the challenge of understanding, describing, computing, and controlling these flows. In this spirit, this mini-symposium aims at providing a forum for questions concerning both numerical and optimization methods specific to fluid flow. On the modeling side, it covers issues related to complex, non-Newtonian flow phenomena, such as the choice of model or appropriate stabilization. Furthermore, in the area of simulation, novel numerical methods are considered, ranging from discretization schemes to the treatment of free-boundary and deforming-domain problems. In all cases, the flow solution may serve as the forward solution of a shape optimization problem. To this end, this mini-symposium will cover novel techniques for shape representation as well as new methods for an efficient evaluation of the design.
Topics of this mini-symposium include, but are not limited to:
Non-Newtonian fluid models describing shear-thinning or viscoelastic properties.
Simulation methods, including stabilization schemes, interface capturing, and interface tracking.
Methods related to shape optimization in fluid flow, in particular geometry representation, reduced order models, and development of objective functions.
Methods particular to specific applications.
Keywords
Non-Newtonian Fluids, Moving Boundaries, Shape Optimization, Model Order Reduction.
This mini-symposium aims to bring together researchers working on various aspects of topology optimization applied to fluids, solids and structures. In particular, we are interested in recent advances in topology optimization. Suggested topics include, but are not limited to:
Novel and efficient topology optimization algorithms
New methods to handle manufacturing, stress and other constraints
Exact solutions to topology optimization problems
New methods to solve multi-objective topology optimization problems
Recent advances in reliability-based topology optimization (RBTO)
Efficient solution of industrial large-scale topology optimization problems
Inclusion of microstructure in topology predictions
Recent advances in topology optimization applied to multi-physics problems
Exploiting high-performance computing in topology optimization considering parallelism by CPU and/or GPU
New methods of adaptive mesh refinement in topology optimization
Multiscale topology optimization
Topology optimization applied to fluid problems
Accurate modeling of solid materials requires careful calibration of appropriate constitutive models, which typically requires the solution of an inverse problem to determine model parameter values that yield the closest match to an observed response. The ability to obtain accurate, credible results from simulations requires the calibrated model to be valid throughout potentially large regions of the state and parameter spaces, the extent of which is not generally known during calibration. Thus, proper use of material constitutive models requires that 1) the calibration produce a set of model parameters that is optimal in some sense, and 2) the fitness of the model and parameter values be assessed with respect to a scenario that is generally not fully specified prior to calibration.
Several significant challenges arise in this context, including the following: It is difficult in general to define objective functions that are smooth and convex with a unique global minimum, so local optimization techniques can be inadequate. Rigorous validation of a calibrated model for its intended use is often time-consuming and complicated. Evaluations of the objective function and its derivatives at each iteration of the optimization process typically require the solution of an expensive forward problem. Probabilistic (e.g. Bayesian) calibration methods suffer from high computational costs and the curse of dimensionality.
For this minisymposium, we are soliciting contributions that address these challenges. We are particularly interested in research that targets one or more of the following topics: 1) physics-constrained optimization in the context of model calibration; 2) machine learning and associated techniques that generate surrogate or reduced-order models for increased computational efficiency; 3) methods that provide uncertainty quantification for model parameters; 4) approaches that address multiphysics, multiscale, and/or multi-fidelity aspects of model calibration; 5) techniques that leverage full-field data.
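To make the calibration task above concrete, the following minimal sketch (Python with SciPy) fits a hypothetical two-parameter power-law hardening model to synthetic noisy uniaxial data by deterministic least squares; in a realistic setting the cheap analytic forward model would be replaced by an expensive forward (e.g. finite element) solve, and the parameter names, noise level, and bounds here are illustrative assumptions only.

import numpy as np
from scipy.optimize import least_squares

# Synthetic "observed" uniaxial stress-strain data (hypothetical material, stresses in MPa).
eps = np.linspace(0.002, 0.05, 25)
true_K, true_n = 520.0, 0.18
rng = np.random.default_rng(0)
sigma_obs = true_K * eps**true_n + rng.normal(0.0, 5.0, eps.size)

def forward_model(params, eps):
    # Cheap stand-in for an expensive forward solve: power-law hardening sigma = K * eps**n.
    K, n = params
    return K * eps**n

def residuals(params):
    return forward_model(params, eps) - sigma_obs

fit = least_squares(residuals, x0=[300.0, 0.3], bounds=([0.0, 0.0], [np.inf, 1.0]))
print("calibrated K, n:", fit.x)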
Topology optimization provides a powerful tool for innovative design of materials and structures with exceptional mechanical properties, which can be realized through additive manufacturing technology. This mini-symposium aims to address challenging issues in modelling, numerical methods and applications of topology optimization in the context of additively manufactured metamaterials and structures. Topics of interest include but are not limited to: manufacturing constraint modeling, multi-scale and multi-physics optimization for metamaterials, optimization of functional properties (thermal, acoustic, fluid, etc.), incorporating material microstructures into topology optimization, structural and multidisciplinary applications, multi-material topology optimization, modeling of manufacturing defects, manufacturing uncertainty quantification and robust design.
The development of topology optimization techniques has revolutionized the way we design and optimize structural and fluidic systems, and with the help of powerful computational resources and advancements in numerical algorithms, researchers can tackle large-scale, multi-physics optimization problems in a more efficient way. However, the optimization of large-scale structures and fluidic systems presents unique challenges due to the high computational cost and the need for efficient numerical algorithms. Additionally, incorporating thermal-fluid-mechanical coupling effects into the optimization process remains a great concern to researchers, as it requires accurate modeling of multiple physics and consideration of complex material-structure behaviors.
To address these challenges, we invite submissions on a wide range of topics related to large-scale structural and fluidic topology optimization for thermomechanical problems. Some potential topics of interest include, but are not limited to:
Efficient numerical methods for large-scale topology optimization
Parallel computing methods for large-scale simulation and optimization
Multi-scale and multi-material topology optimization
Topology optimization with thermomechanical effects
Topology optimization for fluid-structure interaction problems
We would like to invite scholars and researchers from academia and industry to contribute actively to our mini-symposium by submitting their latest research findings and ideas.
Computational Mechanics methods have created an impressive amount of techniques for finding optimal solutions to design problems, usually in the form of geometric structures. Designs are driven by the fundamental physics of the system, such as heat or electricity conductance, aero- or hydrodynamics, elasticity, self-assembly, chemical reactions, or photonic and phononic dynamics. Computationally, design problems are frequently addressed in a nested fashion: the inner loop solves the physical dynamics within a design and computes the objective function, while the outer loop modifies the design parameters to optimize the objective function subject to limited resources. Mathematically, this approach informs what kind of solution is optimal, but does that optimum address the original design problem?
In this minisymposium we invite contributions that study optimal design of any physical system and ask a broader set of design questions on the boundary of mathematical optimization. Why do particular features, or design rules, become a recurrent motif in solving a family of similar design problems? What if the directly optimized objective function is a poor proxy for the desired design outcome? How much does one need to modify the mathematically "optimal" design to manufacture it? How does the "optimal" solution evolve as the resource budget is continuously adjusted? How does the space of considered design solutions evolve during the design process? How many similar or distinct solutions reach the same value of the objective function? These questions often cannot be addressed within the nested loop of design optimization and instead lie on the boundary between mathematical formalism and qualitative questions.
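As a toy illustration of the nested structure described above, the sketch below sizes the cross-sectional area of a one-dimensional heat-conducting rod: the inner loop assembles and solves the steady conduction problem and evaluates a thermal-compliance objective, while the outer loop (here SciPy's SLSQP) updates the design variables under a resource budget. The physics, parameter values, and objective are hypothetical choices made only to keep the example self-contained.

import numpy as np
from scipy.optimize import minimize

n = 40                          # number of elements
L, k, q = 1.0, 1.0, 1.0         # rod length, conductivity, distributed heat source (assumed units)
h = L / n
vol_limit = 0.5 * L             # resource budget on the total material volume sum(a_i) * h

def solve_physics(a):
    # Inner loop: assemble and solve 1D steady heat conduction with T = 0 at both ends.
    ke = k * a / h                                  # element conductances
    ndof = n + 1
    K = np.zeros((ndof, ndof))
    for e in range(n):
        K[e:e + 2, e:e + 2] += ke[e] * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f = np.full(ndof, q * h)                        # lumped nodal heat loads
    free = np.arange(1, ndof - 1)                   # Dirichlet BCs at the two end nodes
    T = np.zeros(ndof)
    T[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
    return T, f

def objective(a):
    # Thermal compliance: smaller values mean the design conducts the heat away more effectively.
    T, f = solve_physics(a)
    return f @ T

a0 = np.full(n, 0.5)
cons = {"type": "ineq", "fun": lambda a: vol_limit - np.sum(a) * h}
res = minimize(objective, a0, method="SLSQP",       # outer loop: update the design variables
               bounds=[(0.01, 1.0)] * n, constraints=cons)
print("optimized areas:", np.round(res.x, 3))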
Computational design via shape optimization (SO) has been widely explored over the last few decades to solve problems in which boundary and interface phenomena are critical for accurately representing the physical response. SO plays a pivotal role in traditional engineering design applications, where it has been leveraged to improve the performance of structures. In addition, recent advances in additive manufacturing of architected materials have greatly increased design freedom and enabled multifunctional features over traditional structures. SO must now be applied to solve these more complex design problems that involve a combination of complex physics and intricate parameterizations. This will require SO strategies for precise shape control, and large-scale computing, among other considerations. Hence, this mini symposium aims to bring together researchers from diverse backgrounds to not only showcase innovative shape optimization techniques but also encourage collaborative discussion on challenges encountered in multidisciplinary fields where shape optimization has found successful applications. We invite contributions with a focus on but not limited to the following topics:
Design of lattice metamaterials.
Smart/active/responsive material design.
Optimal design of energy systems.
Multi-physics, multiscale, multifunctional design.
Manufacturing constraints in shape optimization.
High-performance computing in shape optimization.
Reduced-order multiscale modeling in shape optimization.
Simultaneous material and shape optimization.
Implicit and explicit shape parameterization of engineering systems.
Generative (and AI-aided) design of shapes.
Optimization under uncertainty.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
1500 Software and High Performance Computing
In the PSE (Problem Solving Environment) concept, humans concentrate on target problems and work out solutions, while those parts of applying the solutions that can be carried out mechanically are performed by computers, machines, or software. PSE provides integrated, human-friendly, innovative computational services and facilities for the easy incorporation of novel solution methods to solve a target class of problems.
PSE is an innovative concept to enrich our e-Science, e-Life, e-Engineering, e-Production, e-Commerce, e-Home, etc. PSE-related studies started in the 1970s with the aim of providing higher-level programming languages than Fortran and the like for scientific computation. The natural trend at that time toward more human-friendly programming environments resulted in PSEs, CAE (Computer-Assisted Engineering), libraries, and so on.
At present PSE covers a rather wide area, for example, program generation support PSEs, education support PSEs, CAE software learning support PSEs, Grid/Cloud computing support PSEs, job execution support PSEs, e-Learning support PSEs, etc.
In this MS, talks and presentations on these topics will be given and discussed.
Recent advances in computational hardware have provided opportunities to diminish resolution constraints and increase computing speed in developing large-scale numerical models. However, utilizing the hardware at its maximal capabilities still remains a challenge. Developing algorithms that optimally leverage the hardware capabilities with performance-portable implementations, enabling the same code to execute correctly and efficiently across a wide range of CPU and GPU architectures, has therefore become an essential aspect of numerical modeling.
This mini-symposium will feature presentations on algorithms that optimally leverage the hardware capabilities with performance-portable implementations for unstructured mesh applications. Relevant topics include algorithms applied to high spatial and/or temporal resolution large-scale numerical models.
1600 Fluid-Structure Interaction, Contact and Interfaces
In structural mechanics, interfaces play a fundamental role in many problems. These interfaces may be physical in nature, such as contact/impact and fracture/crack interfaces, or numerical in nature, such as immersed/embedded boundaries and domain decomposition interfaces. Although a lot of progress has been achieved, the research field is still growing and diversifying in many directions.
This session is devoted to recent developments on the various aspects of contact mechanics:
Interface behaviour: unilateral contact, friction, adhesion, viscosity, fretting, wear, peeling, rolling contact, contact in biomechanics, fluid flow in contact interface.
Computational models: multilevel approaches (molecular and nano-micro-macro models), multi-physics (thermo-piezo- …), coupled multi-field formulations, fractal surface characterization, homogenization, bi-potential.
Computational methods: fast solvers, multi-grid, isogeometric analysis, NURBS, virtual elements.
Dynamics of structures and of rigid bodies, instabilities.
Discretization methods for overlapping immersed and embedded meshes.
Mathematical progress.
Industrial applications involving interface and contact conditions.
Besides presentations of new results and new contributions to the understanding of contact mechanics, this session will provide an opportunity to discuss and exchange ideas on the various topics related to contact mechanics in science and engineering.
Interface dynamics, such as deformation and reaction, play an important role in biology (e.g., cell aggregation) and in industry (e.g., waterproof materials). Modeling and simulating interface dynamics is challenging because multiphase flow and multiple physical fields are involved. Recently, machine-learning-based methods such as neural networks have been introduced to solve the resulting nonlinear coupled systems more efficiently. The purpose of this symposium is to bring together researchers working on modeling, theory, and numerics for interface problems, to share the latest advances in the field, and to provide a forum for joint collaborations.
The proposed minisymposium aims to bring together experts with diverse backgrounds in the construction and analysis of novel discretization techniques for multiphysics models. One example is the intricate interplay between fluids and poroelastic structures arising in a vast diversity of applications in biomedicine and engineering. Our session focuses on the rigorous analysis of solvability and stability of saddle-point and nonlinear problems, a priori and a posteriori error estimation, as well as on the design of robust solvers. The session also provides a platform for contributions that delve into the application of cutting-edge methodologies (including computational approaches leveraging neural networks and learning algorithms to accelerate and generalize established methods) for the solution of coupled models arising in, e.g., brain tissue dynamics, blood and arterial wall couplings, cardiac electromechanics, geophysical flows, filter design, and other types of fluid-poromechanical interaction multiphysics problems.
This symposium will bring together researchers from the engineering community to discuss their work on computational fluid dynamics (CFD) and computational fluid-structure interaction (CFSI). The symposium will cover both computational methods and engineering applications. Topics will include, but are not limited to, theoretical developments, novel computational frameworks, new discretization methods, high-order approaches, moving-mesh methods such as the arbitrary Lagrangian-Eulerian (ALE) and space-time (ST) methods, isogeometric analysis (IGA), FSI coupling strategies, Eulerian and ALE hydrocodes, high-performance computing, and applications to complex problems in engineering, science, and medicine. Recent trends in machine learning techniques for CFD and CFSI will also be of interest in this symposium. The symposium will provide a venue for researchers from both academia and industry to discuss the most recent advances and emerging research directions in this field.
Fluid-structure interaction (FSI) development is fundamental in the analysis of boundary and interface problems in science and engineering. This mini-symposium welcomes topics that overcome relevant challenges in the field and advance the feasibility of simulation-driven applications involving interfaces. Works on complex behavior at the interface in biological, mechanical, aeronautical, and civil engineering FSI applications are encouraged. Contributions in the scope of this gathering include, but are not restricted to, novel computational frameworks, new discretization and high-order approaches, theoretical developments, phase-field and advanced interface capturing techniques, coupling strategies, Eulerian and arbitrary Lagrangian-Eulerian (ALE) hydrocodes, and high-performance computing. Presentations focusing on thorough methodology comparison and implementation details of new algorithms and methods are relevant to this meeting. Recent trends in Machine Learning techniques for accelerating interface FSI problems in engineering are also of interest in this mini-symposium. We aim to bring together experts from academia and industry to foster a collaborative environment of exchange and discovery and discuss the most recent advances and research directions in FSI.
Phase-field modeling offers a powerful and unified treatment of evolving physical interfaces with topological changes on fixed domains. This mini-symposium focuses on the computational aspects of phase-field modeling for multiphase and multiphysics simulations. From a computational mechanics and applied mathematics standpoint, phase-field models pose numerous challenges, namely accuracy, conservation, stability, and proper handling of interface profiles and geometry. We invite participation in novel phase-field formulations and computational methods for interface problems found in, but are not limited to, multiphase flows, fluid-structure interaction, solid-to-solid contacts and rupture phenomena, and interactions with other physical fields. This mini-symposium aims to provide a platform for investigators to disseminate and discuss phase-field-based approaches for wide-ranging multiphase/multiphysics problems in numerous emerging and traditional engineering applications. Contributions to software implementation, parallel computing, acceleration techniques, reduced-order modeling, a posteriori error-control, mesh adaptation, post-processing and visualization techniques are encouraged.
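As a minimal computational illustration of the fixed-domain, diffuse-interface idea discussed above, the sketch below integrates the one-dimensional Allen-Cahn equation with an explicit scheme; the interface width, mobility, and time step are illustrative assumptions rather than recommendations, and realistic multiphase or multiphysics problems require the conservation, stability, and adaptivity considerations this mini-symposium addresses.

import numpy as np

# Minimal 1D Allen-Cahn sketch: phi evolves toward the +/-1 phases separated by a
# diffuse interface of width ~ eps; explicit Euler in time, central differences in space.
nx, L = 200, 1.0
dx = L / (nx - 1)
eps, mobility = 0.02, 1.0
dt = 0.2 * dx**2 / (mobility * eps**2)               # conservative explicit step (assumption)
x = np.linspace(0.0, L, nx)
phi = np.tanh((x - 0.5 * L) / (np.sqrt(2.0) * eps))  # initial interface at x = L/2

for step in range(2000):
    lap = np.zeros_like(phi)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    lap[0] = 2.0 * (phi[1] - phi[0]) / dx**2          # homogeneous Neumann boundaries
    lap[-1] = 2.0 * (phi[-2] - phi[-1]) / dx**2
    dW = phi**3 - phi                                 # derivative of the double well (phi^2 - 1)^2 / 4
    phi += dt * mobility * (eps**2 * lap - dW)

print("interface position ~", x[np.argmin(np.abs(phi))])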
This minisymposium (MS) provides a forum for the presentation, discussion, and dissemination of state-of-the-art computational modeling approaches for coupled mechanical systems. The MS will include novel FSI modeling approaches and numerical methods for the simulation of a variety of applications, including but not limited to Eulerian-Lagrangian contact mechanics, biomechanical FSI, blast-on-structures, cavitation-induced damage, and fluid-thermal-structure interaction. Other areas of interest for this MS include applications of FSI to all scales, communication of software implementation details, performance evaluation of original and commercial codes, benchmark problems, and additional verification and validation schemes.
This mini-symposium focuses on advances in computational fluid-structure interaction (FSI) problems. The presentations will cover a wide range of applications, including aerodynamics, renewable energy (e.g. land-based and airborne wind turbines, wave energy converters, hydropower), biomedicine, aerospace, and civil engineering (bridges and buildings).
The topics to be discussed include:
Partitioned and staggered methodologies
Embedded and Arbitrary Lagrangian-Eulerian methods
Multiphysics coupling methods
High Performance Computing in FSI
Theoretical developments in FSI and moving boundaries
Industrial applications
This mini-symposium is dedicated to the dynamic response of structures immersed in a flow. In the considered global system, the principal loading can be caused either by the flow itself, i.e., flow-induced vibration (FIV), or by the combination of the flow and additional loads, such as seismic events. There are no restrictions on the range of applications and industrial fields addressed, including, for example, power plant components, biological processes and offshore engineering.
The mini-symposium adds an original and specific focus on multi-scale modeling and upscaling to share knowledge and experience on the most suitable ways to provide certified computational mechanics tools to end-users in an engineering environment, with the best level of confidence and validation, relying in particular on thorough quantification and propagation of uncertainties from the small scale(s) to the industrial scale.
In agreement with the general context above, the mini-symposium will gladly welcome contributions addressing either some key aspects of the proposed multi-scale framework or the scale management and upscaling strategies themselves. For illustrative purposes, papers are encouraged to address topics such as:
fluid-structure coupling and advanced modeling at the local scale,
homogenized and porous modeling at an industrial scale and its link with the local scale,
uncertainty quantification and propagation to derive confidence intervals for industrial tools,
validation of numerical strategies at every scale of interest.
The contributions are expected to emphasize the specifics of fluid-structure interactions (FSI), with particular attention paid to stable and unstable regimes (including fluid-elastic instability, for example), the effects of added-mass and added-damping coefficients, and the impact of turbulence at various scales. Mini-symposium topics also include compressibility effects and wave propagation, depending on the considered systems. Moreover, strongly non-linear behavior can be considered for the structures, including contact and impacts. The way to handle such non-linearities within multi-scale and upscaling strategies will represent a valuable source of knowledge and exchange between the contributors.
Explore the forefront of computational mechanics in the context of flow-induced vibrations (FIV) at our symposium. We invite contributions encompassing:
Development of Computational Methods: Delve into novel and efficient computational techniques, including monolithic, partitioned iterative, body-fitted moving boundary, and immersed methods, as well as Eulerian/Lagrangian approaches and more, as they apply to FIV. Propose new ideas for flow-induced vibration/noise, contact and fretting, multiphase and free-surface flows, and other interacting physical fields.
Application of Computational Software to FIV Applications: Present your application of numerical methods for FIV, whether with in-house codes or commercial software solutions, emphasizing their applications in aerospace, offshore engineering, biomedical sciences, acoustics, renewable energy, nuclear engineering, and other fields.
Scaling for Real-World Challenges: Discuss parallel computing algorithms, acceleration techniques, and reduced-order modeling tailored for addressing large-scale FIV problems, bridging the gap between simulations and practical solutions.
We encourage an inclusive atmosphere where contributions that may not directly match the description but hold potential interest for the computational FIV community are also welcome. Join us in advancing our understanding of FIV and its crucial applications across various fields in this symposium dedicated to computational mechanics.
1700 Geomechanics and Natural Materials
This minisymposium serves as a platform for scientists and engineers operating in the realm of computational wood mechanics, wood technology, and related bio-composite computational mechanics. The papers submitted should reflect recent advancements and breakthroughs in the analytical and numerical exploration of the mechanical and physical properties of wood, bio-composites, and structures created from these materials. We also invite papers detailing developments in wood processing, innovative wood and bio-composites, and novel experimental investigations.
The topics that the minisymposium encompasses include:
Theoretical, numerical, and experimental investigations related to computational mechanics of wood and bio-composites across different length scales.
Microscale studies of wood and bio-composites, focusing on cell behavior, fibers, pulp, and paper.
Macroscale investigations into solid wood, wood- and plant-based products, laminated components, and joints.
Structural scale research, centering on building constructions and construction details.
Thermal-hydro-mechanical-chemical (THMC) processes in the subsurface are vital to understanding natural evolutionary processes and engineered systems, including carbon sequestration, energy storage, nuclear waste disposal, geothermal systems, oil and gas development, and mining. Fractures and fracture networks are important in many cases. Numerical modeling of such settings results in nonstationary, nonlinear, coupled problems, possibly subject to inequality constraints, and is crucial for advancing scientific discovery and engineering. This mini-symposium invites contributions to the modelling of two or more coupled processes, including but not limited to novel abstractions and mathematical models, discretization methods, coupling methods, machine learning, neural networks and artificial intelligence, and advancements to support large-scale simulations. Examples of relevant contributions may include but are not limited to:
Numerical models such as finite element, discrete element, and machine-learning based models
Nonlinear constitutive models of geomaterials including phase-field and non-local plasticity
Developments to address ill-conditioning and scalability of solutions
Integration of multiple numerical and/or analytical methods to achieve more efficient and accurate multiphysics models
Geomaterials such as rock, soil, concrete, and timber consist of constituents characterized by multiple length scales. The response and interaction of these constituents determine the macroscopic performance of these materials and the related structures. The latter are inevitably subjected to multifield effects, e.g. in terms of mechanical loading, temperature and moisture changes, and chemical reactions. These effects can lead to stresses and even micro-/macro-cracking, calling for discontinuum analysis in addition to continuum analysis. Therefore, this symposium is intended to provide a forum to present recent advances in geomechanical research, involving the aforementioned multiscale, multifield, and continuum-discontinuum analyses.
Topics within the scope of interest include, but are not limited to, the following aspects:
Multiscale modeling of geomaterials and the related structures (e.g. concurrent/hierarchical modeling, domain decomposition, discrete/continuum coupling…);
Advanced computing and simulation methods (hybrid physics & data, artificial intelligence, automation, probabilistic and statistical approaches, wavelet signal processing);
Continuum and discontinuum modeling of soils, rocks, timber, and concretes;
Advanced numerical methods or algorithms in soil-structure interactions;
Multi-physical couplings between mechanical, hydraulic, hygroscopic, thermal processes, and chemical kinetics;
Large-scale modeling and high-performance computing of geomaterials in underground structures.
Cross-disciplinary contributions are particularly welcomed.
Recent warming of the cryosphere due to climate change is causing significant impacts. Permafrost thaw has resulted in infrastructure damage and coastal erosion and may eventually lead to large greenhouse gas releases. Melting and calving of ice sheets in Antarctica and Greenland have led to global sea level rise creating risks to coastal infrastructure. Arctic sea ice loss has led to increased maritime activity in the region and may be influencing ocean circulation and mid-latitude weather patterns. Accurate modeling of these cryosphere systems is key to predicting future changes and informing public policy.
The focus of this minisymposium is on new computational methodologies for simulating cryosphere systems (land ice, sea ice, permafrost, etc.) and their interaction. The goal is to bring together researchers working on a broad range of cryosphere modeling topics to discuss recent advances and identify synergies.
Topics of interest include, but are not restricted to, the following:
Novel numerical discretizations for ice and permafrost mechanics
New constitutive models
Mechanics-based formulations of ice fracture/calving
Multiscale methods for coupling models with different spatial/temporal scales
Efficient solvers and methods for improving computational performance
Advanced analysis techniques including data assimilation and uncertainty quantification
Data-driven approaches to modeling
We propose this mini-symposium focusing on recent advances in computational approaches in geomechanics. The mechanical behavior of porous granular materials such as soils, rocks, and concrete is highly complex and requires sophisticated computational modeling. Such modeling plays a pivotal role in many engineering practices related to civil infrastructure, energy, and the environment. This mini-symposium aims to provide a forum for the presentation and discussion of recent research in computational geomechanics. Contributions are solicited in, but not limited to, the following topic areas:
Development, implementation, and validation of constitutive models for geomaterials
Computational methods and algorithms for coupled poromechanics and other multi-physics problems
Granular mechanics and other micromechanics approaches to geomaterials
Multiscale modeling techniques
Meshfree methods for large deformation problems
Numerical modeling of fracture and damage processes
Uncertainty quantification and probabilistic methods
Data-driven/machine-learning methods for geomechanics
We welcome submissions from researchers and practitioners from both academia and industry. The mini-symposium will provide an opportunity for participants to exchange ideas, share their research findings, and discuss challenges and future directions in computational geomechanics. We anticipate that this mini-symposium will attract a diverse group of researchers and practitioners and will contribute to the advancement of computational geomechanics.
1800 Data Science, Machine Learning and Artificial Intelligence
The fast growth in practical applications of deep learning in a range of contexts has fueled a renewed interest in deep learning methods over recent years. Subsequently, scientific deep learning is an emerging discipline that merges scientific computing and deep learning. Whilst scientific computing focuses on large-scale models that are derived from scientific laws describing physical phenomena, deep learning focuses on developing data-driven models which require minimal knowledge and prior assumptions. These contrasting approaches come with different advantages: scientific models are effective at extrapolation and can be fitted with small data and few parameters, whereas deep learning models require a significant amount of data and a large number of parameters but are not biased by the validity of prior assumptions. Scientific deep learning endeavors to combine the two disciplines in order to develop models that retain the advantages of their respective disciplines. This mini-symposium collects recent works on scientific deep learning methods covering theories and algorithms for both forward and inverse problems, with applications in engineering, sciences, and scientific computing.
Nowadays, as ever more powerful heterogeneous computers continue to emerge, scientists and engineers face unprecedented challenges in adapting their workflows to digital twins and scientific machine learning. This mini-symposium intends to provide a forum for attendees to exchange information, share best practices, and keep current on the rapidly evolving information technologies impacting the convergence of simulation tools, digital twins, and scientific machine learning. The mini-symposium topics cover (but are not limited to):
Computational environments for advanced scientific machine learning and engineering computation
Digital prototyping techniques
Enabling software technologies
Data science in computational mechanics applications
Software libraries and applications for digital twins, model reduction, and machine learning
Supporting tools in performance evaluation, visualization, verification and validation
Scientific workflows, theoretical frameworks, methodology and algorithms
With the increasing prevalence of data-driven computational tools, novel methods for uncovering complex nonlinear relationships between otherwise-disparate data have achieved tremendous improvements on a range of tasks, including but not limited to problems in computational mechanics. However, measuring causality, and not merely correlation, among these relationships remains a difficult task: many theoretical, practical, and computational issues persist, particularly concerning graphical model recovery, causal attribution, confounding relationships, measurement error, causal time-series models, and complex causal mechanisms.
This minisymposium will convene world-class researchers in a forum to present advances in causal inference, causal discovery, and structured causal models, drawing upon expertise in machine learning, statistics, scientific computing, and specific domain applications.
Artificial intelligence technology has long been applied in the field of computational mechanics. However, few examples of applying the deep learning technology that currently dominates the AI field to computational mechanics have been reported so far. The objective of this mini-symposium is to discuss how to apply artificial intelligence, such as deep and machine learning technologies, to computational mechanics. We warmly welcome anything related to computational mechanics or artificial intelligence aimed at uniting both technologies into significant and beneficial applications. In particular, for deep learning it is necessary to discuss examples that make it possible to simulate problems that were difficult to simulate in the past, or to improve the accuracy of simulations that have been performed in the past.
This MS will have a special emphasis on enabling technologies for Digital Twins, where we adopt the following definition of a Digital Twin:
A digital twin is defined as a virtual representation of a physical asset, or a process enabled through data and simulators for real-time prediction, optimization, monitoring, control, and decision-making.
We find the capability levels shown in Figure 1 very useful when describing a digital twin. Capability levels 1-2 correspond to the virtual twin, whereas levels 3-5 correspond to the predictive twin on which this MS focuses. To enable predictive twins, one may utilize Hybrid Analysis and Modelling (HAM), which combines classical Physics-Based Methods (PBM), accelerated by means of Reduced Order Modelling (ROM), with Data-Driven Methods (DDM) based on sensor measurements analysed by means of Machine Learning (ML). Purely Data-Driven Methods based on sensor measurements analysed by any means of AI are also welcome. In general, this MS welcomes contributions on enabling technologies that can facilitate Predictive Digital Twins. Advanced applications of Predictive Digital Twins are also welcome.
Figure 1. The capability levels of Digital Twins on a scale from 0 to 5.
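As one concrete example of the ROM building block mentioned above, the sketch below extracts a proper orthogonal decomposition (POD) basis from a synthetic snapshot matrix via the SVD and projects a new state onto it; the snapshot field, truncation tolerance, and dimensions are illustrative assumptions, and a full hybrid analysis and modelling workflow would combine such a basis with a physics-based solver and data-driven corrections.

import numpy as np

# Minimal POD sketch: extract a low-dimensional basis from synthetic snapshots and
# project a new state onto it (one ingredient of the hybrid analysis and modelling idea).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 2.0, 60)
# Synthetic snapshot matrix: two standing modes plus a little noise (assumption).
snapshots = np.array([np.sin(np.pi * x) * np.cos(2.0 * ti)
                      + 0.3 * np.sin(3 * np.pi * x) * np.sin(5.0 * ti)
                      + 0.01 * rng.standard_normal(x.size) for ti in t]).T   # shape (200, 60)

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1        # retain 99.9% of the snapshot energy
basis = U[:, :r]

new_state = np.sin(np.pi * x) * np.cos(2.0 * 0.37) + 0.3 * np.sin(3 * np.pi * x) * np.sin(5.0 * 0.37)
coeffs = basis.T @ new_state                       # reduced coordinates
rel_error = np.linalg.norm(basis @ coeffs - new_state) / np.linalg.norm(new_state)
print("reduced dimension:", r, " relative reconstruction error:", rel_error)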
Despite the technical importance of composite materials and structures, systematic design frameworks for them are limited because conventional optimization techniques face difficulties in handling the high-dimensional design space, consisting of an astronomical number of material combinations and configurations as well as a complex nonlinear response beyond the linear regime. With the advancement of machine learning (ML) techniques, extensive efforts are underway to establish alternative data-driven design frameworks for finding the optimal microstructure, external shape, and processing conditions of composite materials and structures. A key element of efficient and successful design is the choice of appropriate ML models and design frameworks that are best suited to the properties of the target design space (size, number of variables, and variable ranges) and the available training dataset (size, fidelity, and design space coverage). The aim of this mini-symposium is to foster and communicate the state of the art in data science and ML-based methods for the design of composite materials and related biomedical engineering applications. Topics of interest include, but are not limited to, the following:
Novel machine-learning algorithms for material designs and their comparisons
3D-printed bio-inspired composites with complex microstructures and superior properties achieved through optimization
Lightweight structural composites with optimized material usage for superior strength, toughness, and fatigue life
Multifunctional composites with multi-objective ML-based optimization
Design, modeling, and synthesis of bio-inspired hierarchical composites
Prediction and design of the nonlinear responses of flexible composites
Optimization of processing conditions of composites for minimal manufacturing defects
Surrogate models based on machine learning methods for engineering applications
Data science applications for biomedical engineering
This mini-symposium continues the effort of bringing together researchers that work on advancing the data-driven, reduced-order modeling, and machine learning approaches within the realm of solid mechanics. In recent years, advancements in machine learning and data-driven approaches provide new research directions for the modeling and simulation of complex problems in solid mechanics. Several promising directions emerge in the field, ranging from directly exploiting data for computational mechanics without constitutive laws, applying deep learning including manifold learning and autoencoders for reduced-order modeling of nonlinear high-dimensional mechanics problems, to integrating data-driven machine learning techniques with physics-based models for various forward and inverse problems such as the discovery of the underlying governing equations of complex material and physical systems. This mini-symposium focuses on recent research developments and applications in data-driven approaches for computational solid mechanics, with topics that include but are not limited to: 1) Model-free data-driven computational mechanics; 2) Physics-informed machine learning for linear and nonlinear solid mechanics; 3) Data-assisted modeling of heterogeneous materials; 4) Data-driven discovery of constitutive laws and governing equations; 5) Interpretable discovery enabled by machine learning; 6) Causal discovery for explainable modeling; 7) Supervised/Unsupervised data/physics-driven learning of surrogate models; 8) Reduced-order real-time simulation of solid; 9) Inverse design problems with machine learning.
During the past decades, the desire to design complex materials and, with it, the necessity to understand their mechanical behaviors have given rise to extensive research in multiscale modeling. Advances in machine learning are emerging as techniques to accelerate multiscale modeling and reduce the time-to-solution of material behavior predictions. Among recent advanced machine learning methods, the deep material network is one notable approach that differentiates itself from conventional machine learning techniques by featuring a binary-tree network built from micromechanics building blocks. The deep material network has gathered considerable attention due to its ability to model the nonlinear response of a wide range of material systems, such as particle-reinforced microstructures, fiber-reinforced composites, and polycrystalline materials, quickly and accurately while being trained only on linear elastic data. The recent surge in micromechanics/physics-embedded neural networks, which feature explainable parameters and the potential to extrapolate the nonlinear material behavior of microstructures, has pushed machine learning and multiscale modeling toward computer-aided engineering at an industrial scale. Hence, we envision that more endeavors are needed in all aspects of mechanistic machine learning-based multiscale modeling.
In this minisymposium, we not only wish to share the cutting-edge research works of machine learning and multiscale modeling, but also to identify the emergent needs of industry to make more rapid progress in practical applications. Topics of interest for this minisymposium include, but are not limited to the following:
High performance computing/accelerated computing for machine learning
Recent advances in deep material network for classes of microstructures
Representative volume element techniques
Adaptive sampling and transfer learning strategies for data generation
Training strategies for calibration of modern neural networks
Interactive learning techniques for structure-property space explorations/predictions in materials design
Material modeling and uncertainty quantification
Mechanistic machine learning-based methods for multiscale simulations at an industrial scale
Mechanistic machine learning-based methods for multiscale failure analysis
Multiscale topology optimization
The recent boom in machine learning (ML) and artificial intelligence (AI) has opened new horizons in computational mechanics. One of the areas in which ML and AI have had tremendous impact is replacing the traditional way in which constitutive models of complex materials are developed. For example, ML and AI have helped solve problems such as: i) homogenization of the response of multiscale materials with complex microstructure; ii) discovery of closed-form material models out of a large library of possible models; iii) posing of inverse problems for identification of heterogeneous model parameters, either by replacing the forward solver with ML surrogates or by directly learning the inverse map from imaging data. These advances have been applied to different materials such as soft tissues (heart, skin, arteries, brain, etc.), metals, elastomers, and soils, among others. Beyond modeling individual material samples, ML tools rooted in a Bayesian formulation have recently been proposed to learn the probability distribution for materials that show inherent variability, either because of uncertainty in microstructure or, in the case of soft tissue, due to inherent variability from one individual to another. One major focus for ML and AI constitutive modeling has been the imposition of physics constraints, as well as the combination of data-driven approaches with established modeling techniques such as micromechanics or microstructure-inspired models. For imposing physics, the loss function can help enforce the constraint, but more recent formulations have architectures that guarantee the desired physics for arbitrary parameters. Regarding the connection to modeling approaches that use information from the microstructure, there have been recent efforts toward interpretable ML models. One of the key advantages of ML and AI constitutive modeling has been the flexibility to capture a wide range of material behavior, from linear elastic to hyperelastic, viscoelastic, and plastic deformation, even in the large-deformation non-equilibrium regime. Finally, in order for data-driven constitutive modeling to be truly useful in practice, implementation into numerical solvers such as finite element packages is needed. This symposium encourages submissions regarding machine learning constitutive models for any kind of material and physics (hyperelasticity, viscoelasticity, plasticity, etc.), with a focus on probabilistic frameworks for material property inference and large-scale simulations.
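As a minimal sketch of the "architecture guarantees the physics" idea mentioned above, the example below represents an incompressible, isotropic strain-energy function as a polynomial in (I1 - 3) with nonnegative coefficients, so that W(3) = 0 and W is convex and non-decreasing in I1 by construction, and fits the coefficients to synthetic uniaxial data by nonnegative least squares; the data, polynomial order, and "ground-truth" law are illustrative assumptions, and a neural-network parameterization with a convexity-preserving architecture would play the same role in practice.

import numpy as np
from scipy.optimize import nnls

# Synthetic incompressible uniaxial data generated from an assumed "ground-truth" law.
lam = np.linspace(1.05, 2.0, 30)
I1 = lam**2 + 2.0 / lam
sigma_obs = 2.0 * (lam**2 - 1.0 / lam) * (0.3 + 0.05 * (I1 - 3.0))   # true W'(I1) = 0.3 + 0.05 (I1 - 3)

# Model: W(I1) = sum_k c_k (I1 - 3)^k with c_k >= 0  ->  W(3) = 0, W convex in I1.
# Uniaxial Cauchy stress: sigma = 2 (lam^2 - 1/lam) W'(I1), which is linear in the c_k.
K = 4
A = np.column_stack([2.0 * (lam**2 - 1.0 / lam) * k * (I1 - 3.0)**(k - 1) for k in range(1, K + 1)])
coeffs, res_norm = nnls(A, sigma_obs)
print("nonnegative coefficients c_1..c_4:", coeffs, " residual:", res_norm)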
This mini-symposium focuses on scientific machine learning (ML) methods for geophysics, covering both Earth- and space-borne applications. These include earthquake science, weather, climate, ocean sciences/engineering, and oceanography, as well as solar physics and space weather, among others. Novel scientific ML methods and benchmarking against established techniques are sought, as well as applications of scientific ML to new problem areas. The aim of this mini-symposium is to provide a platform for investigators to disseminate and discuss scientific ML methods within the broad area of geophysics for both Earth- and space-related applications. Interdisciplinary data-driven methods and their integration with scientific machine learning, along with new ideas in terms of efficient software implementation, are encouraged. Novel datasets for benchmarking scientific machine learning techniques are also welcome.
With the steady development of computer science, machine learning and data science have made significant progress in recent decades. These techniques generally rely on a substantial amount of data samples to extract the abstract mapping hidden within the data. Hence, these technologies have gradually attracted the attention of researchers in the fields of computational mechanics and computational engineering. This mini-symposium aims at bringing together mechanicians, computer scientists, and industrial researchers to promote research and application in big data analysis, data-driven computing, and artificial intelligence in engineering, as well as the scientific exchange among scientists, practitioners, and engineers in affiliated disciplines.
The topics of interest include, but are not limited to:
Data-driven based constitutive modelling
Machine learning based solutions of PDEs
Big data for design and optimization
Data-driven simulation techniques
Data-driven techniques in multi-scale and multi-field simulations of materials
Data-driven modelling of geo- and environmental data
Visualization and visual analytics of geo-data
Data-driven techniques for continuous and discrete methods
With the synthesis of new high-throughput methods, materials R&D is poised for the discovery, characterization, and design of robust materials and manufacturing processes through the development and implementation of machine learning algorithms spanning multimodality, physics constraints, Gaussian processes, and causal inference. The fusion of human expert materials knowledge with multimodal, physically constrained machine learning algorithms can aid in the detection of "fingerprints" critical to materials behavior, prognose component performance, and adapt manufacturing strategies.
This minisymposium convenes world-class researchers in advanced manufacturing, materials characterization, data science, modeling/simulation, and hardware engineering to showcase works with the ability to further materials discovery, characterization, and design. Researchers from national labs, academia, and industry will present and discuss topics such as hybrid, physics-informed machine learning methods to understand process-structure-property mappings, surrogate models using multimodal data streams combining experiments and simulations, and machine learning guided process optimization.
Circuit simulations, often referred to as Spice simulations, are foundational to modern circuit design. Since their inception in the 70s, the prevalent approach in Spice simulations has been to build the underlying circuit models from compact analytic device models. However, this approach is prone to two distinct types of bottlenecks. The first one stems from the fact that traditional development of compact device models is largely a manual, time intensive effort that requires highly skilled experts with combined knowledge of solid-state physics, circuit design, model calibration, and numerical analysis. As a result, development of these models often lags behind the initial design stages of microelectronics circuits. The second, performance bottleneck arises when one attempts to scale traditional Spice simulations to large circuits comprising millions of components. Since compact models for these components can be quite complex on their own, together, they can lead to very large nonlinear systems of equations that are very expensive computationally. This hinders utilization of Spice simulations in multi-query design analysis tasks such as quantifications of the margins of uncertainty, reliability, and optimal design of circuits.
Data-driven approaches such as system identification, model order reduction, non-intrusive operator inference, dynamic mode decomposition, and deep neural network regression, to name a few, have the potential to overcome these bottlenecks by (i) providing the means to automate the development of compact semiconductor device models, either directly from data or from full-featured TCAD (technology computer-aided design) device models, and (ii) enabling the development of computationally efficient surrogates for full-featured circuit models. This session will focus on recent advances in the application of these ideas, ranging from purely data-driven to gray box and physics-informed machine learning approaches to the agile development of computationally efficient models for devices and circuits.
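As a small illustration of point (i) above, the sketch below fits a two-parameter Shockley-type compact diode model to synthetic I-V data using SciPy; in a real workflow the data would come from TCAD simulations or measurements, and the model form, parameter ranges, and noise model here are assumptions made only for the example.

import numpy as np
from scipy.optimize import curve_fit

# Synthetic I-V data standing in for TCAD output or measurements (hypothetical device).
Vt = 0.02585                                    # thermal voltage at ~300 K [V]
v = np.linspace(0.1, 0.7, 40)
rng = np.random.default_rng(2)
i_obs = 1e-12 * (np.exp(v / (1.8 * Vt)) - 1.0) * (1.0 + 0.02 * rng.standard_normal(v.size))

def log_shockley(v, log10_is, n):
    # Compact model I = Is * (exp(V / (n * Vt)) - 1), fitted in log space because the
    # current spans many decades over the bias range.
    return np.log(10.0**log10_is * (np.exp(v / (n * Vt)) - 1.0))

popt, _ = curve_fit(log_shockley, v, np.log(i_obs), p0=[-11.0, 1.5])
print("fitted log10(Is) and ideality factor n:", popt)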
Successful models of the natural world frequently share several mathematical features: they might be continuous, sparse, differentiable, integrable, generalizable, and parsimonious. Historically, such models have been derived from first principles and parameterized via experimental measurements. However, in the recent decades new theoretical approaches, abundant data, and cheap computational power have given rise to data-driven models of complex systems. Such modeling builds upon the theoretical results in dynamical systems, statistical mechanics, information theory, and machine learning. With data-driven techniques we can now extract the governing differential equations from empirical trajectory data, identify the dominant modes of collective behavior, infer the interaction networks from noisy observations, and reconstruct the full system state from a few localized measurements.
In this minisymposium we aim to bring together researchers working on both the theoretical basis and the applications of data-driven models. While many data-driven methods have been established, they call forth new questions. How much noise can a method tolerate and how exactly does the inference break down with high noise? How to integrate multiple data sources with varying degrees of accuracy into the model? How does one identify the best coordinates to express system dynamics concisely? What if not all of the relevant variables were measured? How do the dynamics depend on external parameters? Which variables causally affect the model outcome? How to validate the model form for problems without ground truth labels? When is model accuracy non-monotonic in model and data size, as in the double descent phenomenon? We welcome studies of physical phenomena such as hydro- and aerodynamics, elasticity, electromagnetism, and material properties, as well as biological, neural, ecological, and social behaviors.
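One widely used way of extracting governing equations from trajectory data is sparse regression over a library of candidate terms (in the spirit of SINDy); the sketch below recovers a damped linear oscillator from simulated data with NumPy/SciPy. The system, candidate library, and threshold are hypothetical choices for illustration, and noisier or partially observed data would require the more careful treatments raised in the questions above.

import numpy as np
from scipy.integrate import solve_ivp

# Simulate trajectory data from a damped oscillator (the "unknown" system, an assumption).
def rhs(t, z):
    x, v = z
    return [v, -1.5 * x - 0.2 * v]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], t_eval=np.linspace(0.0, 20.0, 2000))
X = sol.y.T                                      # states (x, v)
dX = np.gradient(X, sol.t, axis=0)               # numerically estimated time derivatives

# Candidate library of terms: [1, x, v, x^2, x*v, v^2]
x, v = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones_like(x), x, v, x**2, x * v, v**2])

def stls(Theta, dX, threshold=0.05, iters=10):
    # Sequentially thresholded least squares (SINDy-style sparse regression).
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(iters):
        Xi[np.abs(Xi) < threshold] = 0.0
        for j in range(dX.shape[1]):
            big = np.abs(Xi[:, j]) >= threshold
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dX[:, j], rcond=None)[0]
    return Xi

print(np.round(stls(Theta, dX), 3))              # expect dx/dt ~ v, dv/dt ~ -1.5 x - 0.2 v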
Artificial intelligence has made revolutionary advancements across a diverse range of fields, including image recognition. Furthermore, in the domain of computational mechanics, its applications have expanded. Particularly, research on regressions and predictions using feed-forward networks has been extensive. On the other hand, design and inverse problems necessitate the estimation of design solutions or internal states from given design specifications and observations. In such cases, more intricate approaches are required beyond simple deep neural networks, including ingenious structural designs, utilization of generative models, incorporation of the underlying domain expertise (model, equations, knowledge) in the construction and design of neural networks, etc. We center our attention on this intricate and stimulating domain, inviting a broad spectrum of research related to design and inverse problems.
Building upon existing relevant studies, this MS welcomes novel approaches to both design tasks and inverse problems. Furthermore, within this MS, we welcome methodologies incorporating mechanics models and approaches aimed at enhancing interpretability. This pursuit aims to improve the accuracy of learning and inference, while also emphasizing the interpretability of outcomes. Practical examples of applications are also welcome.
Within this session, we particularly welcome, but not limited to, research concerning:
Machine learning for design tasks
Machine learning for inverse problems
Physics-based machine learning methods for design and inverse problems
Explainable / Interpretable machine learning
Industrial application of machine learning in design and inverse problems
Symbolic Regression, a dynamic approach rooted in machine learning and symbolic computation, has garnered significant attention within the realm of computational mechanics. It enables data-driven modeling that is inherently interpretable, i.e., producing symbolic representations (equations) that best describe a dataset. Because computational mechanics is built around symbolic expressions, the equations produced by symbolic regression are readily incorporated into a variety of preexisting workflows, including analytical derivations used in computational applications. This minisymposium seeks to illuminate the latest breakthroughs and applications of symbolic regression techniques in advancing the simulation, analysis, and optimization of complex mechanical systems. By harnessing the potential of symbolic regression, researchers are discovering novel avenues to enhance accuracy, model interpretability, and computational efficiency in the domain of computational mechanics.
This minisymposium will explore a comprehensive array of topics that include, but are not limited to:
Equation Discovery and System Identification: Symbolic regression techniques enable the discovery of governing equations from experimental or simulated data, facilitating the identification of system dynamics and behaviors without a priori assumptions (a minimal equation-discovery sketch follows this list).
Physics-Informed Machine Learning: Integrating domain-specific physical insights into symbolic regression yields hybrid models that combine data-driven learning with established physical laws to enable identification of the physical significance of individual model components while also ensuring greater interpretability, model generalization and robustness.
Uncertainty Quantification and Sensitivity Analysis: Symbolic regression streamlines the quantification of uncertainties and sensitivities within computational mechanics models, providing a deeper understanding of system behavior through traceable uncertainty propagation.
Optimization and Design: The application of symbolic regression in optimization scenarios empowers researchers to efficiently explore design spaces, identify optimal configurations, and unveil design principles governing complex mechanical systems.
Interpretable AI in Mechanics: The symbolic nature of regression results in interpretable models, enhancing the transparency and trustworthiness of AI-driven decisions in computational mechanics applications.
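As a minimal, non-authoritative illustration of the equation-discovery theme, the sketch below uses genetic-programming-based symbolic regression to rediscover a simple hidden formula from noisy samples. It assumes the third-party gplearn package is installed; the data, hyperparameters, and target expression are arbitrary placeholders.

```python
# Sketch: symbolic regression with gplearn on synthetic data whose
# hidden "ground truth" is y = x0**2 - 0.5*x1.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = X[:, 0] ** 2 - 0.5 * X[:, 1] + 0.01 * rng.standard_normal(500)

est = SymbolicRegressor(
    population_size=2000,
    generations=20,
    function_set=("add", "sub", "mul"),
    parsimony_coefficient=0.001,   # penalize overly long expressions
    random_state=0,
)
est.fit(X, y)
print(est._program)                # best symbolic expression found
```

In practice, the recovered expression can be inspected, simplified, and inserted directly into existing analytical or computational workflows.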
Improving and accelerating materials development is an important goal in science and industry, as innovative, tailored, and optimized materials ranging from metals and polymers to modern composites and architected materials are key to a sustainable future. Driven by progress in digitization and high-throughput experiments that enhance data availability, the interest in data-driven techniques, which facilitate material design, is constantly increasing.
Research in the field of data-driven modeling and design of materials covers a wide range of topics. For instance, techniques for reconstructing microstructures may enable the generation of appropriate simulation domains, i.e., realistic representative or statistical volume elements. Computational homogenization of these microstructures facilitates prediction and understanding of the interplay between effective properties and microstructural features of complex materials. To describe the behavior of materials with high precision, either model-free approaches that directly exploit data or techniques based on machine learning models that learn from data can be used. In this context, the trend of enriching machine learning approaches with knowledge of the fundamental underlying physics improves extrapolation capability and allows the use of sparse training data. Beyond constitutive modeling, data analysis and machine learning help to exploit knowledge from simulations in terms of surrogate models and are, therefore, key to the prediction of structure-property linkages for the computational design and optimization of materials and structures. Finally, the inverse design of suitable microstructures that ensure certain effective target properties is a task for which machine learning methods are particularly well suited.
Topics of interest covered within this mini-symposium include, but are not limited to, the following (a brief illustrative surrogate-modeling sketch follows the list):
machine learning in computational mechanics and constitutive modeling,
data-driven multiscale simulations,
microstructure characterization and reconstruction, e.g., 2D and 3D image-based methods,
techniques for exploration and inversion of process-structure-property linkages or part of it,
inverse design and optimization approaches for metals, polymers, composites, and architected materials,
design approaches that account for crucial manufacturing constraints, as well as
data-driven-assisted numerical and experimental analysis of new materials across scales.
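As a brief, illustrative sketch of a surrogate for a structure-property linkage, the example below fits a Gaussian-process regressor (via scikit-learn) that maps two hypothetical microstructure descriptors, an inclusion volume fraction and a phase-contrast ratio, to a synthetic effective property. In a real study, the training data would come from computational homogenization runs or experiments.

```python
# Sketch: Gaussian-process surrogate for a synthetic structure-property map.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform([0.05, 1.0], [0.45, 20.0], size=(40, 2))   # [volume fraction, contrast]
# Placeholder "effective property" -- would come from homogenization simulations.
y = 1.0 + 2.0 * X[:, 0] * np.log(X[:, 1]) + 0.02 * rng.standard_normal(40)

gp = GaussianProcessRegressor(kernel=RBF([0.1, 5.0]) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(X, y)

X_new = np.array([[0.3, 10.0]])
mean, std = gp.predict(X_new, return_std=True)
print(f"predicted effective property: {mean[0]:.3f} +/- {std[0]:.3f}")
```

The predictive standard deviation also gives a first handle on where additional homogenization runs would be most informative, e.g., within inverse design loops.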
As scientific machine learning (SciML) matures, we need to move from proof-of-concept training of SciML models on data-rich sources, such as other models, to more limited sources, e.g., limited-modality experimental data. We solicit talks on training techniques and models suitable for generating effective representations of experimental data and of expensive, high-fidelity simulation data.
Contributions on the following types of techniques are encouraged:
sparsity-inducing
multifidelity
symmetry-preserving
optimal sampling/experimental design
We particularly encourage contributions that combine these and/or related approaches.
Modeling complex systems typically involves the evaluation of integro-differential operators that map fields (e.g., initial and boundary data, parameter fields) into solution fields. These operators are traditionally approximated using discretization approaches such as finite differences or finite elements. An emerging alternative is to approximate the operators with neural operators, that is, with deep-learning-based models. Once trained, neural operators are much faster than traditional computational models, and they are advantageous in several contexts, e.g., inverse problems, uncertainty quantification, and high-dimensional problems. In addition, neural operators can naturally assimilate observations and experimental data and can therefore potentially achieve higher fidelity than traditional models.
Several neural operator architectures have been proposed in the literature, e.g., Deep Operator Networks, Fourier Neural Operators, and Kernel Graph Operators, each featuring different strengths and weaknesses.
This mini-symposium focuses on both theoretical and computational aspects of neural operator modeling, covering the design of neural operators, their training and use in the context of computational mechanics applications.
Topics of interest include, but are not limited to: training methods using multi-modal or multi-fidelity data, training approaches that are parallel and scalable, strategies to enforce physical constraints or property preservation, design of hybrid models combining traditional simulation codes with neural operators, accuracy and epistemic uncertainty of neural operators.
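To make the neural operator idea concrete, the sketch below implements a minimal DeepONet-style model in PyTorch for a toy antiderivative operator. The architecture sizes, data generation, and training loop are illustrative assumptions, not a reference implementation of any of the architectures mentioned above.

```python
# Sketch: the branch net encodes an input function sampled at m sensor points,
# the trunk net encodes the query coordinate; their dot product approximates
# the output function at that coordinate.
import torch
import torch.nn as nn

m, p = 50, 32                      # number of sensor points, latent dimension

def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.Tanh()]
    return nn.Sequential(*layers[:-1])          # no activation after the last layer

class DeepONet(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch = mlp([m, 64, 64, p])       # encodes the sampled input function
        self.trunk = mlp([1, 64, 64, p])        # encodes the query coordinate y

    def forward(self, u_sensors, y):
        # u_sensors: (batch, m), y: (batch, 1)
        return (self.branch(u_sensors) * self.trunk(y)).sum(dim=-1, keepdim=True)

# Toy operator to learn: u(x) = sin(a x)  ->  (G u)(y) = integral of u from 0 to y.
# In practice the training pairs would come from simulations or experiments.
model = DeepONet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xs = torch.linspace(0.0, 1.0, m)                # fixed sensor locations
for step in range(2000):
    a = torch.rand(256, 1) * 2.5 + 0.5          # random frequencies in [0.5, 3]
    u = torch.sin(a * xs)                       # (256, m) sampled input functions
    y = torch.rand(256, 1)                      # query points
    target = (1.0 - torch.cos(a * y)) / a       # exact antiderivative at y
    loss = nn.functional.mse_loss(model(u, y), target)
    opt.zero_grad(); loss.backward(); opt.step()
```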
The rapid development of computational technologies in artificial intelligence (AI) and machine learning (ML) has started to revolutionize many aspects of our lives, while also significantly changing the way computational modeling and simulation are performed. Indeed, ML and other intelligent statistical techniques extend the applicability of computational mechanics, molecular modeling, topology optimization, and structural design, for instance by combining physics-based simulations and data-based inference. In this mini-symposium, we aim to provide a forum for the latest developments in applying AI-based technologies, such as ML, to applied mechanics, materials, and engineering problems in general. We welcome all contributions, with particular interest in the following areas:
Applications of computational data science to design of materials at micro and meso scales.
ML approaches to molecular dynamics and finite element methods.
AI-based methods and approaches to additive manufacturing and 3D printing of complex materials.
Data-driven methods for design, synthesis, and characterization of polymers and their composites.
AI-based approaches to materials characterization and analysis.
Hybrid methods in topology optimization.
Novel machine learning algorithms are being used in combination with physics-based modeling in engineering to tackle traditionally intractable problems, for example dynamic applications requiring fast and reliable feedback, ultimately in real time, or highly complex systems involving intractable transformations. Many advances have been made in this field, with researchers incorporating scientific machine learning to solve a variety of numerical tasks in computational science and engineering. Recent works address developments in physics-based machine learning techniques, applications related to industrial systems and processes, and model augmentation with data. However, many challenges remain in incorporating scientific knowledge into black-box machine learning techniques in a robust and reliable way. For example, the stability of a black-box surrogate model must be guaranteed before any real-life application. Stability is particularly important for dynamic applications, which involve an additional layer of complexity, for instance when a learned integrator is applied without access to the correct outputs of the physical system at previous time steps. We propose a minisymposium focusing on the use of model augmentation with machine learning tools for the simulation, optimization, and control of real-time industrial processes. The minisymposium aims to bring together researchers from diverse backgrounds to exchange novel ideas and initiate new lines of research addressing the challenges that hinder the efficient and robust use of surrogate modeling, thereby enhancing classical models in engineering and industrial applications. Particular focus is placed on major challenges in augmenting physical models and model order reduction for complex problems and digital twinning applications, in particular:
Nonlinear surrogate modeling
High dimensional parameter space
Surrogate modeling of real-world and industrial applications
Augmented physical models with data
Physics-informed, data-based, machine learning models
Stability and control issues in surrogate models (a minimal rollout-stability sketch follows this list)
Construction and optimization of integrators
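The sketch below illustrates the rollout-stability issue flagged in the list above: a network trained only on one-step pairs of a hypothetical damped-pendulum system can still drift when applied autoregressively, i.e., when it feeds on its own predictions rather than on the true states at previous time steps. All modeling choices are placeholders.

```python
# Sketch: one-step surrogate training versus autoregressive rollout.
import torch
import torch.nn as nn

dt = 0.05
def step_true(x):                  # reference dynamics of a damped pendulum
    q, v = x[..., :1], x[..., 1:]
    return torch.cat([q + dt * v, v + dt * (-torch.sin(q) - 0.1 * v)], dim=-1)

# One-step training pairs sampled over the state space.
x0 = torch.rand(4096, 2) * 4 - 2
x1 = step_true(x0)

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(3000):
    loss = nn.functional.mse_loss(net(x0), x1)
    opt.zero_grad(); loss.backward(); opt.step()

# Autoregressive rollout: small one-step errors can accumulate over many steps.
x_ref = x_sur = torch.tensor([[1.0, 0.0]])
with torch.no_grad():
    for _ in range(400):
        x_ref, x_sur = step_true(x_ref), net(x_sur)
print("rollout error after 400 steps:", torch.norm(x_ref - x_sur).item())
```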
Emulators constructed from deep neural networks trained on data have recently emerged as a computationally efficient alternative to solving the complex systems of ordinary/partial differential equations that underlie many single- and multi-physics applications in science and engineering. However, efficient, accurate, and stable training of state-of-the-art models requires large volumes of data extracted from high-fidelity numerical simulations or costly experiments, which effectively prohibits the application of these methods to high-fidelity problems. At the same time, there is an opportunity to generate additional data sources from simplified models, obtained, e.g., by coarsening the discretization of the PDEs or simplifying the underlying governing equations. A limited quantity of high-fidelity outputs or experimental data can then be supplemented with a substantial volume of results obtained from such simplified models. The main drawback of these so-called “low-fidelity” models is that they are typically biased and do not retain the high-fidelity prediction capabilities necessary for trustworthy predictions. As a consequence, augmenting sparse high-fidelity data sets with these less expensive simulations requires careful consideration to avoid corrupting the information contained in the original high-fidelity model; this is often called “negative transfer” in the machine learning community. Several recent advancements in multi-fidelity and transfer learning have demonstrated the potential benefits of this approach. In this minisymposium, we welcome contributions that develop, discuss, and/or demonstrate approaches designed to lower the computational budget associated with the multi-fidelity training of high-fidelity data-driven models. We are particularly interested in contributions that assess the trustworthiness and reliability of the proposed emulators.
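As a minimal, synthetic illustration of the multi-fidelity idea described above, the sketch below learns an additive correction from a cheap, biased low-fidelity model to a scarce high-fidelity one, so that only a handful of expensive evaluations are needed. Both models and all settings are hypothetical one-dimensional stand-ins.

```python
# Sketch: additive multi-fidelity correction fitted from few high-fidelity samples.
import numpy as np

def f_hi(x):                       # "expensive" high-fidelity model
    return np.sin(8 * x) * x + 0.2 * x**2

def f_lo(x):                       # cheap, biased low-fidelity model
    return np.sin(8 * x) * x - 0.3

x_hi = np.linspace(0.0, 1.0, 6)    # only six high-fidelity evaluations
delta = f_hi(x_hi) - f_lo(x_hi)    # discrepancy at those points

coeffs = np.polyfit(x_hi, delta, deg=2)   # low-order correction model

def predict(x):
    return f_lo(x) + np.polyval(coeffs, x)

x_test = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(predict(x_test) - f_hi(x_test)))
print(f"max error of the corrected low-fidelity surrogate: {err:.3e}")
```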
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
In this mini-symposium, we propose to discuss recent advances in the fields of reduced-order modeling and machine learning applied to industrial and mechanical large-scale problems. Nowadays, environmental requirements must be addressed in almost all mechanical domains. In the aeronautical industry, for example, a new design process involves large-scale multi-disciplinary problems that are often solved in a highly coupled fashion in order to explore many configurations of aeronautical components. As new industrial designs must consume and pollute less, the design process must meet the same requirements. Design exploration based on such large-scale problems is highly demanding in CPU resources, and high-performance computing is an indispensable tool for the verification and validation of a design of interest. However, the exploration phase needs to be supported by even more efficient methods. Hence, the exploration phase resembles a machine-learning inference stage, while the verification phase resembles the resolution of a high-fidelity large-scale problem.
This mini-symposium is an opportunity to discuss how these machine-learning approaches are becoming unavoidable. A discussion of the possible numerical certification of these techniques is also required in light of recent advances in uncertainty quantification for reduced-order modeling and machine learning. The subjects addressed in this mini-symposium include, but are not limited to:
Physics-based machine learning for regression of scalar and field quantities of interest
Physics-based reduced-order modeling and its hybridization with AI technologies (a minimal POD-based sketch follows this list)
Uncertainty propagation and predictive uncertainty quantification
Non-parameterized variability, including geometric variability
Applications to structural mechanics and thermal analysis, computational fluid mechanics, and electromagnetism
Efficiency in training and inference stages
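As a brief, illustrative sketch of the projection step behind physics-based reduced-order modeling, the example below extracts a POD basis from synthetic snapshots via the SVD and Galerkin-projects a placeholder full-order operator onto it; hybridization with AI (e.g., learned closures or regressions in the reduced coordinates) would build on top of such a representation. All data here are random stand-ins.

```python
# Sketch: POD basis extraction and Galerkin projection of a full-order operator.
import numpy as np

rng = np.random.default_rng(0)
n, n_snap, r = 1000, 60, 8                # full order, number of snapshots, reduced size

# Synthetic snapshot matrix with low-rank structure plus small noise.
modes = rng.standard_normal((n, r))
S = modes @ rng.standard_normal((r, n_snap)) + 1e-3 * rng.standard_normal((n, n_snap))

U, s, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :r]                              # POD basis: first r left singular vectors
print("retained energy:", (s[:r] ** 2).sum() / (s ** 2).sum())

A = rng.standard_normal((n, n))           # placeholder full-order operator
b = rng.standard_normal(n)                # placeholder right-hand side
A_r = V.T @ A @ V                         # Galerkin-projected reduced operator
x_r = np.linalg.solve(A_r, V.T @ b)       # inexpensive reduced solve
x_approx = V @ x_r                        # lift back to the full space
```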
This mini-symposium (MS) explores the intersection of machine learning and complex simulations. The MS comprises four distinct sections, each addressing critical aspects of enhancing computational methods through the application of machine learning techniques. This symposium provides a comprehensive view of the exciting possibilities that emerge when machine learning and complex simulations converge. Attendees can expect to gain practical knowledge, explore innovative approaches, and connect with experts in the field, ultimately advancing the efficiency and accuracy of computational methods in the context of large-scale simulations for multiphase flows, flow in porous media, turbulent flows, etc.
Potential topics include, but are not limited to, the following:
Leveraging machine learning (ML) techniques for large-scale multiphase flow simulations
Improving computational efficiency in multiphase flow simulations through ML
Machine learning models for enhancing grid generation (efficient grid generation is fundamental to accurate simulations)
Using ML in multiphase flows
Using ML to replace sub-grid-scale eddy viscosities in turbulent flows (a brief illustrative sketch follows this list)
Prediction of fluid flow and heat transfer in porous media
Practical insights and case studies showcasing the real-world application of machine learning techniques
Integrating machine learning into simulation workflows for tangible performance enhancements
Data mining in complex simulations
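As a purely illustrative sketch of replacing a sub-grid-scale eddy viscosity with a learned model, the example below trains a small MLP to map a resolved strain-rate magnitude to an eddy viscosity, using a Smagorinsky-type closure as a stand-in for filtered DNS data; all quantities are synthetic and non-dimensional.

```python
# Sketch: learned eddy-viscosity closure trained on a synthetic reference.
import torch
import torch.nn as nn

c_s, delta = 0.17, 1.0                        # Smagorinsky constant, filter width (non-dimensional)

# Synthetic resolved strain-rate magnitudes |S|; in practice these would be
# computed from filtered velocity fields.
S_mag = torch.rand(10000, 1) * 2.0
nu_t_target = (c_s * delta) ** 2 * S_mag      # Smagorinsky-type reference closure

# Small MLP closure; the Softplus output keeps the eddy viscosity non-negative.
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())
opt = torch.optim.Adam(model.parameters(), lr=5e-3)
for _ in range(2000):
    loss = nn.functional.mse_loss(model(S_mag), nu_t_target)
    opt.zero_grad(); loss.backward(); opt.step()

print("learned nu_t at |S| = 1:", model(torch.tensor([[1.0]])).item())
```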
2000 Other
The Beijing International Center for Theoretical and Applied Mechanics (BICTAM) is an international non-governmental scientific organization affiliated with the International Union of Theoretical and Applied Mechanics (IUTAM). Within the framework of WCCM2024, BICTAM plans to organize a mini-symposium on computational mechanics to bring scientists from Canada and China together for future cooperation.
Authors and presenters are invited to participate in this symposium to expand international cooperation, understanding, and promotion of efforts and disciplines within the joint Canada-China sessions on computational mechanics. Dissemination of knowledge through the presentation of research results, new developments, and novel concepts in these sessions will serve as the foundation upon which the conference program in this area will be developed.
A variety of topics/sessions are available, giving authors flexibility in their presentations. All sessions are quality driven.