List of Accepted Special Sessions
FUSION 2024 has accepted 14 special sessions promoting a focused discussion of innovative topics in information fusion research.
Authors are invited to check whether their submissions topically fit into one of the proposed special sessions.
Special Session Descriptions
A. Bayesian Neural Networks
Organizers
- Uwe D. Hanebeck, Intelligent Sensor-Actuator-Systems Laboratory (ISAS), Institute of Anthropomatics, Department of Informatics, Karlsruhe Institute of Technology (KIT), Germany
- Marco F. Huber, Center for Cyber Cognitive Intelligence, Fraunhofer IPA, and Institute of Industrial Manufacturing and Management (IFF), University of Stuttgart, Germany
- Lyudmila Mihaylova, Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Hayk Amirkhanian, Center for Cyber Cognitive Intelligence, Fraunhofer IPA, Stuttgart, Germany
- Marcel Reith-Braun, Intelligent Sensor-Actuator-Systems Laboratory (ISAS), Institute of Anthropomatics, Department of Informatics, Karlsruhe Institute of Technology (KIT), Germany
- Markus Walker, Intelligent Sensor-Actuator-Systems Laboratory (ISAS), Institute of Anthropomatics, Department of Informatics, Karlsruhe Institute of Technology (KIT), Germany
Abstract
Bayesian Neural Networks (BNNs) have garnered significant attention in recent years for their ability to provide a probabilistic framework that seamlessly integrates uncertainty into neural network predictions. Despite the impressive predictive capabilities of these nonlinear models, quantifying and assessing their uncertainty poses challenges.
Consequently, both efficient, scalable approximations and new methods for verifying the trustworthiness of BNNs are needed to enable their reliable use in areas such as estimation and control.
This special session will cover a wide area of topics related to BNNs, including:
- Training and prediction methods, in particular, using nonlinear Kalman filtering and other nonlinear estimation methods.
- Addressing challenges associated with the computational demands of BNNs, seeking efficient algorithms and scalable solutions.
- Methods and metrics for assessing the calibration, trustworthiness, and overall quality of the (uncertainty) predictions.
- Utilization of BNNs for capturing and representing complex nonlinear relationships within nonlinear dynamic systems.
- Integration of BNNs into control strategies for improved system performance and adaptability.
- Applications of BNNs for data fusion, estimation, and control.
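As a toy illustration of the predictive uncertainty these topics concern, the sketch below draws weight samples from an assumed Gaussian posterior of a one-dimensional linear model and computes a Monte Carlo predictive mean and spread; all values are illustrative assumptions, not a method endorsed by the session:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Gaussian posterior over the weights of a 1-D linear model
# y = w * x + b (the numbers are assumptions for the sketch, not real data).
w_mean, w_std = 2.0, 0.3
b_mean, b_std = 0.5, 0.1

def predict(x, n_samples=1000):
    """Monte Carlo predictive mean and standard deviation at input x."""
    w = rng.normal(w_mean, w_std, n_samples)   # one weight per posterior sample
    b = rng.normal(b_mean, b_std, n_samples)
    y = w * x + b                              # one prediction per sample
    return y.mean(), y.std()

mean, std = predict(3.0)
# The predictive mean lies near w_mean * 3 + b_mean = 6.5, and the spread
# grows with |x| because weight uncertainty is amplified by the input.
```

The same sampling idea underlies practical BNN prediction schemes; the difference is that the posterior over weights is high-dimensional and must itself be approximated.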
B. Cooperative Localization and Multi-Target Tracking Over Networks
Organizers
- Mattia Brambilla, Politecnico di Milano
- Paolo Braca, NATO STO CMRE
- Peter Willett, University of Connecticut
- Stefano Coraluppi, Systems & Technology Research (STR)
Abstract
Cooperative localization systems, constituted by a network of collaborating devices, are gaining popularity in numerous application domains, including surveillance systems, wireless networks, the IoT, automotive, and more. Depending on the application domain, hardware availability, and the functionalities of the localized objects, cooperative systems improve network localization and target tracking performance. A key feature is the ability to merge different types of information collected and/or generated at/by multiple devices. This calls for intelligent fusion strategies that coherently combine the individual observations of each sensor. Such strategies can be embedded into centralized or distributed network architectures, where the fusion algorithm runs either in a single processing center or distributed over the sensors themselves.
C. Online Evaluation of Filter Algorithms: Monitoring, Self-Assessment, Noise Estimation, Maneuvering Targets, and Adaptation
Organizers
- Thomas Griebel, Ulm University, Germany
Abstract
Over the past decades, the fusion community has focused mainly on improving and enhancing filtering and tracking algorithms. However, the online evaluation of existing fusion algorithms is at least as important as the development of new tracking algorithms. For example, in automated driving, the need for safety, reliability, and robustness has been identified and addressed in the ISO 21448 safety of the intended functionality (SOTIF) [1] standard. In this context, self-assessment or monitoring of fusion algorithms, in particular, is essential to ensure and fulfill the general safety requirements. Research in this area has been conducted under various names, such as monitoring, self-assessment, maneuvering targets, or noise estimation. These topics can be viewed as an approach to the online evaluation of filtering algorithms, which can potentially be used to perform an online adaptation process. This special session aims to bring together researchers in this area to discuss these highly topical challenges regarding safety and robustness, which are among the key issues of our time.
References
[1] International Organization for Standardization. ISO/PAS 21448: Road Vehicles – Safety of the Intended Functionality. ISO, Publicly Available Specification, 2019.
D. Evaluation of Technologies for Uncertainty Reasoning
Organizers
- Erik Blasch, Air Force Research Lab, USA
- Paulo Costa, George Mason University, Fairfax, VA, USA
- Pieter De Villiers, University of Pretoria, Pretoria, South Africa
- Valentina Dragos, Onera, France
- Anne-Laure Jousselme, CS Group, La Garde, France
- Lance Kaplan, DEVCOM Army Research Laboratory, USA
- Kathryn Laskey, George Mason University, Fairfax, VA, USA
- Claire Laudy, Thales, France
- Gregor Pavlin, Thales, The Netherlands
- Ali Raz, George Mason University, Fairfax, VA, USA
- Juergen Ziegler, IABG, Ottobrunn, Germany
Abstract
The ETURWG special session was first held at FUSION 2010 and has been held every year since, consistently drawing between 30 and 50 attendees. While most attendees are ETURWG participants, new researchers and practitioners interested in uncertainty evaluation have also attended, and some have stayed with the ETURWG. The 2024 ETUR special session will focus on the concept of Information Fusion Theory and Experimentation for Decision-Making under Uncertainty, and its connections with uncertainty representation and reasoning within the Information Fusion context. Topics of discussion will include:
- Ontology-based evaluation of uncertainty
- Uncertainty considerations of Large Language Models (LLM)
- Automated uncertainty evaluation
- Uncertainty provenance
- Explainability & interpretability
- Multi-modal fusion
- Decision making
- Assessment of High-Level Information Fusion Systems
- Use Case-based Uncertainty Evaluation and Experimentation
- Advances in AI for Uncertainty Evaluation
- Uncertainty Evaluation in AI systems
The discussion will not be limited to specific approaches and can cover a wide range of applications.
The 2024 ETUR special session will focus on exploring the different ways in which use case-based evaluation and experimentation can help decision-making, as well as uncertainty representation and reasoning within the Information Fusion context. This includes related work on machine learning, explainability, hybrid systems, human-machine teaming, automated vehicles, cognitive security, and other advanced knowledge representation and reasoning techniques. The impact on the ISIF community will be an organized session presenting a series of uncertainty representation methods coordinated with their evaluation. The techniques discussed and the questions and answers will be valuable for researchers in the ISIF community; the bigger impact, however, will be for the customers of information fusion systems, who must determine how to measure, evaluate, and approve systems that assess the situation beyond Level 1 fusion.
The customers of information fusion products will gain guidelines for drafting requirements documentation, assessing the gain of fusion systems over current techniques, and identifying issues that are important in information fusion system design. One of the main goals of information fusion is uncertainty reduction, which depends on the chosen representation. Uncertainty representation differs across the various levels of Information Fusion (as defined by the JDL/DFIG models). Given the advances in information fusion systems, there is a need to determine how to represent and evaluate situation (Level 2 Fusion), impact (Level 3 Fusion), and process refinement (Level 5 Fusion) assessments, which are not yet well standardized in the information fusion community.
E. Advanced Nonlinear Filtering
Organizers
- Uwe D. Hanebeck, Intelligent Sensor-Actuator-Systems Laboratory (ISAS), Institute of Anthropomatics, Department of Informatics, Karlsruhe Institute of Technology (KIT), Germany
- Ondřej Straka, Department of Cybernetics, Faculty of Applied Sciences, University of West Bohemia (UWB), Czech Republic
- Jindřich Duník, Department of Cybernetics, Faculty of Applied Sciences, University of West Bohemia (UWB), Czech Republic; and Aerospace Advanced Technology Europe, Honeywell International
- Fred Daum, Raytheon
- Daniel Frisch, Intelligent Sensor-Actuator-Systems Laboratory (ISAS), Institute of Anthropomatics, Department of Informatics, Karlsruhe Institute of Technology (KIT), Germany
Abstract
Methods for Bayesian inference in nonlinear systems are of fundamental interest to the information fusion community. Great efforts have been made to develop state estimation methods that approximate the true posterior ever more closely. Further objectives are to increase their efficiency, reduce their requirements and assumptions, and allow their application in more general settings. Areas such as target tracking, guidance, positioning, navigation, sensor fusion, fault detection, and decision-making usually require linear or nonlinear state estimation methods, making them of broad interest to the information fusion community.
These methods provide a state estimate of a dynamic system, which is in general not directly measurable, from a set of noisy measurements. The development of state estimation started in the 1960s with the appearance of the well-known Kalman filter (KF) and the use of simple linearization approaches to deal with nonlinear dynamic systems. Satisfactory performance of these legacy KF-based methods was limited to system models with mild nonlinearities, together with perfect knowledge of the system, that is, of the system functions, the noise distributions, and their respective parameters.
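As a minimal illustration of the linearization idea behind these legacy methods, the sketch below runs a single predict/update cycle of an extended Kalman filter on a scalar toy model; the model and all numbers are illustrative assumptions:

```python
# One predict/update cycle of an extended Kalman filter (EKF) for a scalar
# random-walk state observed through the nonlinear measurement z = x**2 + noise.
# All numbers are illustrative assumptions.

x_est, P = 1.0, 0.5    # prior state mean and variance
Q, R = 0.01, 0.1       # process and measurement noise variances

# Predict: random-walk dynamics x_k = x_{k-1} + w_k
x_pred = x_est
P_pred = P + Q

# Update: linearize h(x) = x**2 at the predicted state
z = 1.2                      # received measurement
H = 2.0 * x_pred             # Jacobian dh/dx evaluated at x_pred
S = H * P_pred * H + R       # innovation variance
K = P_pred * H / S           # Kalman gain
x_est = x_pred + K * (z - x_pred ** 2)   # corrected state estimate
P = (1.0 - K * H) * P_pred               # reduced posterior variance
```

The first-order Taylor expansion at `x_pred` is exactly what limits such filters to mild nonlinearities, motivating the more advanced schemes listed below.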
For the last three decades, a huge effort has gone towards the derivation of
- State estimation techniques able to deal with highly nonlinear and/or non-Gaussian models, following either a Bayesian or an optimization approach, which allow a more informative description of the estimate through probability distributions or distribution parameters, and
- Robust estimation techniques able to cope with a possible model mismatch (including uncertainties in the noise description) or measurements corrupted by outliers. These methods were subsequently improved to increase their efficiency, reduce their requirements/assumptions, and to allow their application in more general settings.
This special session focuses on recent advances in nonlinear state estimation (filters, smoothers, and predictors) for both discrete- and continuous-time system models, in areas such as:
- Nonlinear and/or Non-Gaussian Estimation
- Density-specific estimators (e.g., Gaussian, Student’s t, transformed Gaussian, Rayleigh, Laplace), including nested, sigma-point, or stochastic integration-based designs,
- Global estimators such as point-mass, Gaussian mixture, or sequential Monte Carlo methods, a.k.a. particle filters, and Monte Carlo sampling methods,
- Particle flow, homotopy-based, and progressive estimators,
- Performance evaluation of estimation methods,
- Joint and dual estimation of states and model parameters.
- Robust Estimation
- Robust techniques with partially unknown system models (system functions or noise statistics),
- Robust techniques for measurements corrupted by outliers or unexpected model behaviors,
- Linearly/nonlinearly constrained estimation.
- Efficient Estimator Design and Applications
- State estimation in high-dimensional spaces,
- Performance analysis of existing nonlinear filtering methods,
- Applications of nonlinear state estimation methods.
F. Applications of Stone Soup
Organizers
- Paul Thomas, Defence Science and Technology Laboratory (Dstl), United Kingdom
- Jordi Barr, Defence Science and Technology Laboratory (Dstl), United Kingdom
Abstract
The Stone Soup framework is a flexible, modular, open-source framework for developing and proving a wide variety of tracking and information-fusion-based solutions. Since its inception in 2017, it has aimed to provide the target tracking and state estimation community with an open, easy-to-deploy framework to develop and assess the performance of different types of trackers. Now, through repeated application in many use cases, implementation of a wide variety of algorithms, multiple beta releases, and contributions from the community, the framework has reached a stable point and is proving to be an essential tool in evaluation and characterization of tracking and state estimation approaches.
This special session highlights recent research contributions within the Stone Soup framework and emphasizes its evaluation and comparison capabilities. Discussions in this session will typically draw upon Stone Soup’s evaluation features, including comprehensive evaluation of a proposed approach against a number of other approaches across a number of use cases.
G. Multimodal Data and Explainable AI for Healthcare and Surveillance Technologies
Organizers
- Mohsen Naqvi, Newcastle University, UK
- Lyudmila Mihaylova, University of Sheffield, UK
Abstract
A central demand on AI from responsible data engineering bodies is explainability. Open questions also remain on the difference between trustworthy AI and explainable AI; there is, however, consensus that present AI systems are, in general, neither trustworthy nor explainable. This special session focuses these questions on explainable AI driven by geo-located multimodal AIS data and multimodal ADHD data. Its key target is to present interdisciplinary collaborative data and AI research approaches (led mainly by the engineering, medical, and defence sectors) that require professionals from different domains to exchange knowledge. In particular, interdisciplinary research on ADHD mental health performed with the CNTW-NHS Foundation Trust will be presented. (This trust is one of the largest mental health and disability trusts in England, employing more than 7,000 staff, serving a population of approximately 1.7 million, and providing services across an area totaling 4,800 square miles. It works from over 70 sites across Cumbria, Northumberland, Newcastle, North Tyneside, Gateshead, South Tyneside, and Sunderland, UK.) This special session calls for theoretical and practical works in healthcare and security applications and is open to everybody working in security and surveillance and in healthcare technologies.
H. Multi-modal Fusion for Assured Positioning, Navigation, and Timing (PNT)
Organizers
- Zak M. Kassas, US Department of Transportation Center for Automated Vehicles Research with Multimodal Assured Navigation (CARMEN), Department of Electrical & Computer Engineering, The Ohio State University, USA
- Jindřich Duník, Department of Cybernetics, Faculty of Applied Sciences, University of West Bohemia (UWB), Czech Republic; and Aerospace Advanced Technology Europe, Honeywell International
Abstract
Development of modern navigation and timing algorithms is closely tied to the advent of state estimation and data fusion methods. State estimation methods provide a valuable tool to infer time-varying and unknown navigation quantities (such as the position, velocity, or attitude and heading of a moving object) or timing information from a set of indirectly related and noisy measurements and a priori information on the dynamics of the object, all of which are tied together through a state-space model formulation. Data fusion methods can be seen as a further extension of state estimation methods, where multiple estimates or sources of information are merged to obtain a “global” estimate with superior performance.
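As a minimal sketch of this merging-of-estimates idea, the following fuses two independent, unbiased estimates of the same quantity by inverse-variance weighting; the scenario and numbers are illustrative assumptions, not a specific navigation algorithm:

```python
# Inverse-variance (information-weighted) fusion of two independent, unbiased
# estimates of the same quantity; the fused variance is smaller than either
# input variance. Scenario and numbers are illustrative.

def fuse(x1, var1, x2, var2):
    """Fuse two independent estimates by inverse-variance weighting."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    var = 1.0 / (w1 + w2)            # information (inverse variance) adds up
    x = var * (w1 * x1 + w2 * x2)    # weighted toward the confident source
    return x, var

# e.g., a coarse position fix fused with a more precise one (1-D for brevity)
x_fused, var_fused = fuse(10.2, 4.0, 9.8, 1.0)
```

The fused estimate leans toward the lower-variance source, and the fused variance (0.8 here) is below that of either input, which is the sense in which the “global” estimate has superior performance.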
Navigation and timing algorithms are core components in a wide range of applications and devices in today’s society, including (autonomous) transportation, wearables, robotics, and financial and power distribution services, to name a few. As such, current and envisioned navigation algorithms are required to process measurements from a broad variety of heterogeneous sensors (including technologies such as inertial sensors, satellite navigation, signals of opportunity, altimeters, LiDARs, star trackers, or terrain and other maps) to provide high-quality navigation and timing information with required levels of accuracy, integrity, availability, and continuity. To fulfil stringent requirements on the navigation and timing information in challenging environments, novel state estimation, data fusion, fault detection, and system identification methods for nonlinear/non-Gaussian models must be designed and employed. In parallel, the methods should be kept computationally feasible in order to process (nearly optimally) all the available information in real time. In this context, advanced state estimation methods can take advantage of recent developments in the areas of machine learning (ML) and artificial intelligence.
This special session focuses on recent advances and envisioned directions of state estimation, data fusion, fault detection, and system identification and modelling, as used in the design of novel navigation and timing algorithms. In particular, the session focuses on:
- Integrated Navigation System Design
- Integration of standard sensors (e.g., inertial, GNSS) with emerging/anticipated sensors (e.g., LEO-PNT, signals-of-opportunity)
- Data-based reference systems (terrain-aided, gravity and magnetic field aided, celestial, vision)
- Sensor, environment, and map error modelling
- Navigation systems for manned and unmanned vehicles
- Certification perspectives
- Resilience and Integrity
- Resilience to interference, jamming, and spoofing
- Integrity monitoring under assumption of multi-constellation scenario (GNSS, LEO-PNT)
- Integrity of navigation information
- Machine Learning in Navigation
- Explainable ML-based navigation
- End-to-end ML navigation
- Data-augmentation of standard navigation algorithms
- State Estimation, Data Fusion, Fault Detection, and Modelling for Navigation
- Computationally lightweight algorithms for nonlinear/non-Gaussian models
- Numerical integration, Gaussian filters, point-mass and particle filters
- Performance bounds in state estimation
- Fault detection methods for integrity monitoring
- Data-based and physics-informed modeling.
I. Context-based Information Fusion
Organizers
- Jesús García, GIAA – University Carlos III de Madrid, Spain
- Lauro Snidaro, Department of Mathematics and Computer Science, University of Udine, Italy
- José M. Molina, GIAA – University Carlos III de Madrid, Spain
- Ingrid Visentini, LimaCorporate, Italy
Abstract
The goal of the proposed session is to discuss approaches to context-based information fusion. It will cover the design and development of information fusion solutions that integrate sensor data with contextual knowledge.
The development of IF systems inclusive of contextual factors and information offers an opportunity to improve the quality of the fused output, provide solutions adapted to the application requirements, and enhance tailored responses to user queries. Challenges for context-based strategies include selecting appropriate representations, exploitations, and instantiations. Context can be represented as knowledge bases, ontologies, geographical maps, etc., forming a powerful tool to improve adaptability and system performance. Example applications include context-aided tracking and classification, situational reasoning, and ontology building and updating. The session therefore covers both representation and exploitation mechanisms, so that contextual knowledge can be efficiently integrated into the fusion process and enable adaptation mechanisms.
Topics include but are not limited to:
- Representation and exploitation of contextual information
- Management of heterogeneous contextual sources (hard and soft data and knowledge)
- Injection of a priori knowledge to improve the performance of fusion systems
- Augmentation of tracking, classification, recognition, reasoning, situation analysis, etc. algorithms with contextual information
- Adaptation techniques that make the system respond not only to the changing target state but also to the surrounding environment
- Strategies and algorithms for context discovery (offline and online)
- Application examples including context-aided surveillance systems (security/defense), traffic control, autonomous navigation, cyber security, ambient intelligence, ambient assistance, etc.
J. Marine Surface Situational Awareness
Organizers
- Dr. Edmund Brekke, Department of Engineering Cybernetics, Norwegian University of Science and Technology, Norway
- Dr. Paolo Braca, NATO S&T CMRE, La Spezia Italy
- Dr. Roberto Galeazzi, DTU Electrical Engineering, Technical University of Denmark
Abstract
Sensor fusion is a core component of autonomous vehicles, whether underwater, aerial, land-based, or on the water surface. While autonomy has reached a high degree of maturity for aerial and underwater vehicles, autonomy on roads and on the water surface is currently a very active research area. While automotive autonomy is increasingly dominated by cameras and machine learning, the field of marine surface autonomy makes use of a larger variety of sensors, and model-based methods dominate the research literature. Sensor fusion for marine vessels includes detection, tracking, localization, classification, and segmentation. More generally, sensor fusion enables autonomous vehicles to build automated situational awareness, and it can also support the situational awareness of their human operators. Situational awareness ranges beyond the immediate perception and interpretation of sensor data, to comprehension of what the data mean and projection in order to plan for future events and outcomes.
Topics of this Special Session may include, but are not limited to
- Tracking of vessels on the water surface using exteroceptive sensors
- Simultaneous localization and mapping for surface vessels
- Prediction and classification of the motion of surface vessels
- Classification and scene recognition for exteroceptive sensor data on the water surface
- Sea state estimation and condition monitoring for surface vessels
K. Information Fusion for Situation Understanding and Sense-Making
Organizers
- Lauro Snidaro, Department of Mathematics and Computer Science, University of Udine, Italy
- Jesús García, GIAA – University Carlos III de Madrid, Spain
Abstract
The exploitation of all relevant information originating from a growing mass of heterogeneous sources, both device-based (sensors, video, etc.) and human-generated (text, voice, etc.), is a key factor for producing a timely, comprehensive, and accurate description of a situation or phenomenon in order to make informed decisions. Even when exploiting multiple sources, most fusion systems are developed to combine just one type of data (e.g., positional data) to achieve a certain goal (e.g., accurate target tracking), without considering other relevant information (e.g., the current situation status) from other abstraction levels.
Seamless combination of information from diverse sources, including HUMINT, OSINT, and so on, has been achieved only in a few narrowly specialized and limited areas. In other words, there is no unified, holistic solution to this problem.
Processes at different levels generally work on data and information of different nature. For example, low level processes could deal with device-generated data (e.g. images, tracks, etc.) while high level processes might exploit human-generated knowledge (e.g. text, ontologies, etc.).
The overall objective is to improve sense-making over the information collected from multiple heterogeneous sources and processes, with the goal of improved situational awareness; topics include sense-making of patterns of behavior, global interactions, information quality, and the integration of sources of data, information, and contextual knowledge.
The proposed special session will bring together researchers working on fusion techniques and algorithms often considered to be different and disjoint. The objective is thus to foster the discussion of and proposals for viable solutions to address challenging problems in relevant applications.
L. Extended Object and Group Tracking
Organizers
- Tim Baur, Control Engineering Group, Institute of System Dynamics, Constance University of Applied Sciences, Germany
- Patrick Hoher, Control Engineering Group, Institute of System Dynamics, Constance University of Applied Sciences, Germany
- Marcus Baum, Data Fusion Lab, Institute of Computer Science, University of Goettingen, Germany
- Uwe D. Hanebeck, Intelligent Sensor-Actuator-Systems Lab (ISAS), Institute of Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), Germany
- Johannes Reuter, Control Engineering Group, Institute of System Dynamics, Constance University of Applied Sciences, Germany
Abstract
Traditional object tracking algorithms assume that the target object can be modeled as a single point without a spatial extent. However, there are many scenarios in which this assumption is not justified. For example, when the resolution of the sensor device is higher than the spatial extent of the object, a varying number of measurements can be received, originating from points on the entire surface or contour or from spatially distributed reflection centers. Furthermore, a collectively moving group of point objects can be seen as a single extended object because of the interdependency of the group members.
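To make the "varying number of measurements" idea concrete, here is a minimal sketch, with assumed, illustrative geometry, of estimating an extended object's centroid and extent from scattered detections, in the spirit of random-matrix approaches:

```python
import numpy as np

rng = np.random.default_rng(1)

# When a high-resolution sensor returns many detections from one extended
# object, the sample mean of the points estimates the object's centroid and
# their sample covariance estimates its extent. The elliptical ground truth
# below is an illustrative assumption.
true_center = np.array([5.0, -2.0])
half_axes = np.array([2.0, 0.5])          # object extent along x and y

# Simulate 200 scattered reflection points on the object, plus sensor noise
points = true_center + rng.normal(size=(200, 2)) * half_axes
points += rng.normal(scale=0.1, size=points.shape)

centroid = points.mean(axis=0)            # estimated object center
extent = np.cov(points, rowvar=False)     # estimated spread/extent matrix
```

Real extended-object trackers recursively update such centroid and extent estimates over time from far fewer detections per scan; this batch sketch only shows what a single cloud of measurements reveals about shape.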
This Special Session addresses fundamental techniques, recent developments, and future research directions in the field of extended object and group tracking. It has been organized annually at the FUSION conference since FUSION 2009 in Seattle.
Topics of this Special Session may include, but are not limited to
- Methodologies: Bayesian inference, nonlinear filtering, random sets, artificial neural networks, deep learning, track-to-track fusion, data association, learning of models
- Sensors: Radar, LiDAR, sonar, RGB cameras
- Applications: Navigation, surveillance, robotics, autonomous driving, maritime tracking, medicine, SLAM
- Case studies: Benchmark scenarios, performance measures, experiments, simulations
M. LAFUSION
Organizers
- Claudio M. de Farias, NCE/PESC, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Paulo Costa, George Mason University, Fairfax, VA, USA
Abstract
The aim of this special session is to extend the papers presented at the First Latin American Workshop on Information Fusion (LAFUSION 2023), which focused on the latest research results on information fusion in Latin America. The goal of that workshop was to create a community of information fusion researchers in Latin America that will be part of the FUSION community in the coming years. Information fusion is a multidisciplinary field that focuses on combining and integrating information from diverse sources to improve the accuracy, completeness, and reliability of the resulting information. It involves the process of merging data or knowledge from multiple sensors, databases, or information systems to generate a unified and coherent representation of the underlying reality. The main goal of information fusion is to extract meaningful and actionable insights by leveraging the strengths of individual information sources while compensating for their limitations, uncertainties, or redundancies. It aims to provide a more comprehensive and accurate understanding of a given situation or phenomenon than what can be achieved by using individual sources in isolation.
Applications of information fusion are widespread and can be found in fields such as surveillance and intelligence, remote sensing, robotics, autonomous systems, medical diagnosis, weather forecasting, transportation systems, and cybersecurity. By integrating and interpreting information from multiple sources, information fusion enables improved situational awareness, decision-making, and prediction capabilities, leading to enhanced performance, efficiency, and reliability in complex and uncertain environments. Many Latin American problems could be addressed with information fusion, and we aim to form a forum to debate its use in producing solutions for the challenges of the region.
N. Multiagent Estimation
Organizers
- Luigi Chisci, Department of Information Engineering, Università degli Studi di Firenze, Italy
- Lin Gao, University of Electronic Science and Technology of China (UESTC), China
- Giorgio Battistelli, Department of Information Engineering, Università degli Studi di Firenze
Abstract
A multiagent estimator consists of multiple interacting agents that process measurements of (possibly multiple) objects of interest so as to infer the existence (number) of objects as well as the state of each object. Specific examples of multiagent systems include netted radars, vehicles or drones with onboard sensors, mobile phones, and many others. Driven by rapid developments in electronics, communication, and network technologies, multiagent systems are almost everywhere in modern society. Compared to single-agent systems, multiagent ones provide additional benefits such as spatial diversity, broader coverage, and enhanced observability and/or estimation performance. Moreover, some heterogeneous multiagent systems (e.g., consisting of microphones and cameras) can achieve stereo perception from different viewpoints. Not surprisingly, data processing over multiagent systems is far more challenging than for single-agent systems, and related research hotspots include (but are not limited to) data spreading protocols, fusion, attack resilience, and object matching among agents. The aim of this special session is to collect the newest models, algorithms, technologies, and results concerning estimation, both single-object and multi-object, with multiagent systems. We hope this special session can bring together researchers involved in multiagent estimation for lively discussion and promote the development of this research field.
Contact
Special sessions:
specialsessions@fusion2024.org