FUSION 2024 has accepted 14 special sessions promoting a focused discussion of innovative topics in information fusion research.
Authors are invited to check whether their submissions topically fit into one of the proposed special sessions.
Bayesian Neural Networks (BNNs) have garnered significant attention in recent years for their ability to provide a probabilistic framework that seamlessly integrates uncertainty into neural network predictions. Despite their impressive predictive capabilities, quantifying and assessing uncertainty in these nonlinear models poses challenges.
Consequently, both efficient, scalable approximations and new methods for verifying the trustworthiness of BNNs are needed to enable their reliable use in areas such as estimation and control.
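The predictive-uncertainty idea above can be made concrete with a minimal sketch. The following plain-NumPy example assumes a mean-field Gaussian posterior over the weights of a toy one-dimensional linear model (a stand-in for a trained BNN layer; all values are illustrative, not taken from any particular method) and shows how Monte Carlo sampling of the weight posterior yields a predictive mean together with an uncertainty that a point-estimate network cannot express:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative mean-field Gaussian posterior over the weights of a
# 1-D linear model y = w*x + b (stand-in for a trained BNN layer).
w_mean, w_std = 2.0, 0.3
b_mean, b_std = 0.5, 0.1
noise_std = 0.2            # aleatoric (observation) noise

def predict(x, n_samples=1000):
    """Monte Carlo predictive mean and standard deviation."""
    w = rng.normal(w_mean, w_std, n_samples)
    b = rng.normal(b_mean, b_std, n_samples)
    preds = w * x + b
    epistemic_var = preds.var()          # spread due to weight uncertainty
    total_var = epistemic_var + noise_std**2
    return preds.mean(), np.sqrt(total_var)

mean, std = predict(1.0)
# The predictive std grows with |x|: weight uncertainty is amplified
# far from the data origin, a behavior point predictions cannot show.
```

Note how the epistemic term, driven purely by the weight posterior, is exactly the quantity that scalable approximations must estimate efficiently and that verification methods must check.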
This special session will cover a wide range of topics related to BNNs, including:
Cooperative localization systems constituted by a network of collaborating devices are gaining popularity in numerous application domains, with examples in surveillance systems, wireless networks, the IoT, the automotive sector, and more. Depending on the application domain, hardware availability, and the functionalities of the localized objects, cooperative systems can improve network localization and target tracking performance. A key feature is the ability to merge different types of information collected and/or generated at/by multiple devices. This calls for intelligent fusion strategies that coherently combine the observations of each individual sensor. Such strategies can be embedded into a centralized or a distributed network architecture, where the fusion algorithm resides in a single processing center or is distributed over the sensors themselves.
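The distributed architecture mentioned above can be illustrated with a minimal consensus-averaging sketch, in which each device repeatedly exchanges its local estimate with its neighbors and no fusion center is needed. The topology, gain, and values below are illustrative assumptions, not part of any specific system:

```python
import numpy as np

# Illustrative: 4 devices each hold a noisy local estimate of the same
# scalar quantity (e.g., one coordinate of a target position).
estimates = np.array([9.8, 10.3, 10.1, 9.6])

# Undirected communication graph as an adjacency list (ring topology).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

def consensus_step(x, eps=0.3):
    """One synchronous consensus iteration: each node moves toward
    its neighbors' values. No central processing node is required."""
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        x_new[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs)
    return x_new

x = estimates
for _ in range(50):
    x = consensus_step(x)
# All nodes converge to the average of the local estimates, i.e. the
# same result a centralized fusion center would compute.
```

The step size `eps` must be small enough for stability (here well below the spectral limit of the ring graph); the same averaging skeleton underlies many distributed fusion schemes.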
Over the past decades, the fusion community has focused mainly on improving and enhancing filtering and tracking algorithms. However, the online evaluation of existing fusion algorithms is at least as important as the development of new tracking algorithms. For example, in automated driving, the need for safety, reliability, and robustness has been identified and addressed in the ISO 21448 safety of the intended functionality (SOTIF) standard. In this context, self-assessment or monitoring of fusion algorithms is essential to ensure and fulfill the general safety requirements. Research in this area has been conducted under various names, such as monitoring, self-assessment, maneuvering targets, or noise estimation. These topics can be viewed as approaches to the online evaluation of filtering algorithms, which can in turn drive an online adaptation process. This special session aims to bring together researchers in this area to discuss these highly topical challenges regarding safety and robustness, which are among the key issues of our time.
International Organization for Standardization. ISO/PAS 21448: Road Vehicles – Safety of the Intended Functionality. ISO, Publicly Available Specification, 2019.
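One widely used building block for the kind of online self-assessment described above is the normalized innovation squared (NIS) consistency check: for a well-tuned filter, the time-averaged NIS should stay close to the measurement dimension. The sketch below uses synthetic innovations and an assumed acceptance band purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def average_nis(innovations, S):
    """Normalized innovation squared, averaged over time. For a
    consistent filter it should be close to the measurement
    dimension (here 1)."""
    return np.mean(innovations**2 / S)

S = 0.5                                     # predicted innovation variance
good = rng.normal(0.0, np.sqrt(S), 500)     # filter model matches reality
bad = rng.normal(0.0, np.sqrt(4 * S), 500)  # actual noise 4x larger

def consistent(innovations, S, lo=0.8, hi=1.25):
    """Simple online monitor: flag the filter when the time-averaged
    NIS leaves an (assumed, illustrative) acceptance band."""
    return lo <= average_nis(innovations, S) <= hi

# With this synthetic data, `bad` fails the check, while `good`
# typically passes; a failed check can trigger online adaptation.
```

In practice the acceptance band would come from chi-square bounds at a chosen confidence level, and the average would run over a sliding window so the monitor reacts to model changes online.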
The ETURWG special session started at FUSION 2010 and has been held every year since, with attendance consistently between 30 and 50 participants. While most attendees are ETURWG participants, new researchers and practitioners interested in uncertainty evaluation have attended the sessions, and some have stayed with the ETURWG. The 2024 ETUR special session will focus on the concept of Information Fusion Theory and Experimentation for Decision-Making under Uncertainty, and its connections with uncertainty representation and reasoning within the Information Fusion context. Topics of discussion will include:
The discussion will not be limited to specific approaches and can cover a wide range of applications.
The 2024 ETUR special session will focus on exploring the different ways in which use case-based evaluation and experimentation can support decision-making, as well as uncertainty representation and reasoning, within the Information Fusion context. This includes related work on machine learning, explainability, hybrid systems, human-machine teaming, automated vehicles, cognitive security, and other advanced knowledge representation and reasoning techniques. The impact on the ISIF community will be an organized session presenting a series of uncertainty representation methods coordinated with evaluation. The techniques discussed and the questions and answers will be important for researchers in the ISIF community; however, the bigger impact will be for the customers of information fusion systems, who must determine how to measure, evaluate, and approve systems that assess the situation beyond Level 1 fusion.
The customers of information fusion products will gain guidelines for drafting requirements documentation, for quantifying the gain of fusion systems over current techniques, and for identifying issues that are important in information fusion system design. One of the main goals of information fusion is uncertainty reduction, which depends on the representation chosen. Uncertainty representation differs across the various levels of Information Fusion (as defined by the JDL/DFIG models). Given the advances in information fusion systems, there is a need to determine how to represent and evaluate situation assessment (Level 2 fusion), impact (Level 3 fusion), and process refinement (Level 5 fusion), which are not yet well standardized for the information fusion community.
Methods for Bayesian inference with nonlinear systems are of fundamental interest in the information fusion community. Great efforts have been made to develop state estimation methods whose estimates come ever closer to the truth. Further objectives are to increase their efficiency, relax their requirements and assumptions, and allow their application in more general settings. Areas such as target tracking, guidance, positioning, navigation, sensor fusion, fault detection, and decision-making usually require linear or nonlinear state estimation methods, which makes these methods of broad interest to the information fusion community.
These methods provide an estimate of the state of a dynamic system, which is in general not directly measurable, from a set of noisy measurements. The development of state estimation started in the sixties with the appearance of the well-known Kalman filter (KF) and the use of simple linearization approaches to deal with nonlinear dynamic systems. Satisfactory performance of these legacy KF-based methods was limited to system models with mild nonlinearities, together with perfect knowledge of the system, that is, the system functions, the noise distributions, and their respective parameters.
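For reference, the KF recursion mentioned above reduces, in the scalar linear-Gaussian case, to a few lines of predict-update code. All model parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal scalar Kalman filter for x_k = a*x_{k-1} + w,  z_k = x_k + v.
a, Q, R = 1.0, 0.01, 0.25     # dynamics, process and measurement noise
x_est, P = 0.0, 1.0           # initial estimate and its variance

truth = 1.0
for _ in range(100):
    # Simulate the (random-walk) truth and a noisy measurement.
    truth = a * truth + rng.normal(0, np.sqrt(Q))
    z = truth + rng.normal(0, np.sqrt(R))

    # Predict step: propagate estimate and variance through the dynamics.
    x_pred = a * x_est
    P_pred = a * P * a + Q
    # Update step: weight the innovation by the Kalman gain.
    K = P_pred / (P_pred + R)
    x_est = x_pred + K * (z - x_pred)
    P = (1 - K) * P_pred
# After the recursion, P has converged to its steady-state value and
# x_est tracks the truth far better than the raw measurements do.
```

Nonlinear methods (EKF, UKF, particle filters, and their many successors) generalize exactly this predict-update structure when the dynamics or measurement functions are nonlinear or the noises non-Gaussian.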
For the last three decades, a huge effort has gone towards the derivation of nonlinear estimation methods that relax these requirements.
This special session focuses on recent advances in nonlinear state estimation (filters, smoothers, and predictors) for both discrete- and continuous-time system models, in areas such as:
The Stone Soup framework is a flexible, modular, open-source framework for developing and evaluating a wide variety of tracking and information-fusion-based solutions. Since its inception in 2017, it has aimed to provide the target tracking and state estimation community with an open, easy-to-deploy framework to develop and assess the performance of different types of trackers. Now, through repeated application in many use cases, implementation of a wide variety of algorithms, multiple beta releases, and contributions from the community, the framework has reached a stable point and is proving to be an essential tool in the evaluation and characterization of tracking and state estimation approaches.
This special session highlights recent research contributions within the Stone Soup framework and emphasizes its evaluation and comparison capabilities. Discussions in this session will typically draw upon Stone Soup's evaluation features, including comprehensive evaluation of a proposed approach against a number of alternatives across a range of use cases.
The main demand on AI from the responsible data engineering bodies is to provide explainable AI. Open questions also remain on the difference between Trustworthy AI and Explainable AI. However, there is consensus that the present processing-based AI, in general, is neither Trustworthy nor Explainable. In this special session, these questions will be examined in the context of Explainable AI driven by geo-located multimodal AIS data and multimodal ADHD data. The key target of this special session is to present interdisciplinary (mainly engineering, medical, and defence industry led) collaborative data and AI research approaches that require professionals from different domains to exchange knowledge. In particular, interdisciplinary research on ADHD mental health performed with the CNTW-NHS Foundation Trust will be presented. (This trust is one of the largest mental health and disability Trusts in England, employing more than 7,000 staff, serving a population of approximately 1.7 million, and providing services across an area totaling 4,800 square miles. It works from over 70 sites across Cumbria, Northumberland, Newcastle, North Tyneside, Gateshead, South Tyneside and Sunderland, UK.) This special session will call for theoretical and practical works in healthcare and security applications, and will also be open to everybody working in Security and Surveillance and Healthcare Technologies.
Development of modern navigation and timing algorithms is closely tied with the advent of state estimation and data fusion methods. State estimation methods provide a valuable tool to infer time-varying and unknown navigation quantities (such as position, velocity, or attitude and heading of a moving object) or timing information from a set of indirectly related and noisy measurements and a priori information on the dynamics of the object, all of which are tied together through a state-space model formulation. Data fusion methods can be seen as a further extension of state estimation methods, where multiple estimates or sources of information are merged together to get a “global” estimate with superior performance.
Navigation and timing algorithms are core components of a wide range of applications and devices in today's society, including (autonomous) transportation, wearables, robotics, and financial and power distribution services, to name a few. As such, current and envisioned navigation algorithms are required to process measurements from a broad variety of heterogeneous sensors (including technologies such as inertial sensors, satellite navigation, signals of opportunity, altimeters, LiDARs, star trackers, or terrain and other maps) to provide high-quality navigation and timing information with the required levels of accuracy, integrity, availability, and continuity. To fulfil stringent requirements on navigation and timing information in challenging environments, novel state estimation, data fusion, fault detection, and system identification methods for nonlinear/non-Gaussian models must be designed and employed. In parallel, these methods should be kept computationally feasible so that all the available information can be processed (nearly optimally) in real time. In this context, advanced state estimation methods can take advantage of recent developments in machine learning (ML) and artificial intelligence.
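The "global estimate with superior performance" obtained by merging multiple sources can be sketched, for the simplest case of two independent Gaussian estimates of the same state, via information-form fusion. The sources and numbers below are illustrative only:

```python
import numpy as np

def fuse_independent(x1, P1, x2, P2):
    """Information-form fusion of two independent Gaussian estimates
    of the same state: precisions add, means are precision-weighted."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)
    x = P @ (I1 @ x1 + I2 @ x2)
    return x, P

# Illustrative 2-D position estimates from two independent sources,
# e.g. a satellite-navigation fix and a radio-based fix.
x_gnss = np.array([10.2, 4.9]); P_gnss = np.diag([4.0, 4.0])
x_rad = np.array([9.7, 5.3]);   P_rad = np.diag([1.0, 1.0])

x_f, P_f = fuse_independent(x_gnss, P_gnss, x_rad, P_rad)
# The fused covariance is smaller than either input covariance,
# reflecting the performance gain from combining both sources.
```

The same precision-weighted combination is what a centralized estimator computes implicitly; the independence assumption is the key caveat, which motivates the more careful fusion rules used when correlations are unknown.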
This special session focuses on recent advances and envisioned directions in state estimation, data fusion, fault detection, and system identification and modelling, as used in the design of novel navigation and timing algorithms. In particular, the session focuses on:
The goal of the proposed session is to discuss approaches to context-based information fusion. It will cover the design and development of information fusion solutions that integrate sensor data with contextual knowledge.
The development of IF systems inclusive of contextual factors and information offers an opportunity to improve the quality of the fused output, provide solutions adapted to the application requirements, and enhance tailored responses to user queries. Challenges for context-based strategies include selecting the appropriate representations, exploitation mechanisms, and instantiations. Context can be represented as knowledge bases, ontologies, geographical maps, etc., forming a powerful tool to improve adaptability and system performance. Example applications include context-aided tracking and classification, situational reasoning, and ontology building and updating. The session therefore covers both representation and exploitation mechanisms, so that contextual knowledge can be efficiently integrated into the fusion process and enable adaptation mechanisms.
Topics include but are not limited to:
Sensor fusion is a core component of autonomous vehicles, whether underwater, aerial, land-based, or on the water surface. While autonomy has reached a high degree of maturity for aerial and underwater vehicles, autonomy on roads and on the water surface is currently a very active research area. While automotive autonomy is increasingly dominated by cameras and machine learning, the field of marine surface autonomy makes use of a wider variety of sensors, and model-based methods dominate its research literature. Sensor fusion for marine vessels includes detection, tracking, localization, classification, and segmentation. More generally, sensor fusion enables autonomous vehicles to build automated situational awareness, and it can also support the situational awareness of their human operators. Situational awareness ranges beyond the immediate perception and interpretation of sensor data, to comprehension of what the data mean, and projection in order to plan for future events and outcomes.
Topics of this Special Session may include, but are not limited to
The exploitation of all relevant information originating from a growing mass of heterogeneous sources, both device-based (sensors, video, etc.) and human-generated (text, voice, etc.), is a key factor in producing a timely, comprehensive, and accurate description of a situation or phenomenon in order to make informed decisions. Even when exploiting multiple sources, most fusion systems are developed to combine just one type of data (e.g. positional data) in order to achieve a certain goal (e.g. accurate target tracking), without considering other relevant information (e.g. the current situation status) from other abstraction levels.
The goal of seamlessly combining information from diverse sources including HUMINT, OSINT, and so on exists only in a few narrowly specialized and limited areas. In other words, there is no unified, holistic solution to this problem.
Processes at different levels generally work on data and information of different nature. For example, low level processes could deal with device-generated data (e.g. images, tracks, etc.) while high level processes might exploit human-generated knowledge (e.g. text, ontologies, etc.).
The overall objective is to enhance sense-making over information collected from multiple heterogeneous sources and processes, with the goal of improved situational awareness, including topics such as sense-making of patterns of behavior, global interactions and information quality, and the integration of sources of data, information, and contextual knowledge.
The proposed special session will bring together researchers working on fusion techniques and algorithms often considered to be different and disjoint. The objective is thus to foster the discussion of and proposals for viable solutions to address challenging problems in relevant applications.
Traditional object tracking algorithms assume that the target object can be modeled as a single point without a spatial extent. However, there are many scenarios in which this assumption is not justified. For example, when the resolution of the sensor device is higher than the spatial extent of the object, a varying number of measurements can be received, originating from points on the entire surface or contour or from spatially distributed reflection centers. Furthermore, a collectively moving group of point objects can be seen as a single extended object because of the interdependency of the group members.
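The measurement model described above, several point measurements per scan originating from across the object's extent, can be illustrated with a minimal numerical sketch. The centroid-plus-sample-covariance estimate below is the simplest possible extent estimator and reflects the intuition behind random-matrix approaches; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth object: center plus an elliptical extent, modeled as the
# covariance of the scattering points on the object.
center = np.array([5.0, 2.0])
extent = np.array([[2.0, 0.5],
                   [0.5, 0.5]])
R = 0.05 * np.eye(2)              # sensor noise covariance

# One scan: a varying number of detections from across the extent.
n = 10 + rng.poisson(30)
sources = rng.multivariate_normal(center, extent, n)
meas = sources + rng.multivariate_normal(np.zeros(2), R, n)

# Simplest estimator: centroid for the kinematic state, sample
# covariance (minus sensor noise) for the extent. This is the core
# idea behind random-matrix extended-object models.
centroid = meas.mean(axis=0)
extent_hat = np.cov(meas.T) - R
```

Full extended-object trackers embed exactly these two quantities, a kinematic state and an extent matrix, into a recursive Bayesian filter instead of estimating them from a single scan.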
This Special Session addresses fundamental techniques, recent developments, and future research directions in the field of extended object and group tracking. It has been organized annually at the FUSION conference since 2009 in Seattle.
The aim of this special session is to extend the papers presented at the First Latin American Workshop on Information Fusion (LAFUSION 2023), which focused on the latest information fusion research results in Latin America. The goal of that workshop was to create a community of information fusion researchers in Latin America that will become part of the FUSION community in the coming years. Information fusion is a multidisciplinary field that focuses on combining and integrating information from diverse sources to improve the accuracy, completeness, and reliability of the resulting information. It involves merging data or knowledge from multiple sensors, databases, or information systems to generate a unified and coherent representation of the underlying reality. The main goal of information fusion is to extract meaningful and actionable insights by leveraging the strengths of individual information sources while compensating for their limitations, uncertainties, or redundancies. It aims to provide a more comprehensive and accurate understanding of a given situation or phenomenon than can be achieved by using individual sources in isolation.
Applications of information fusion are widespread and can be found in fields such as surveillance and intelligence, remote sensing, robotics, autonomous systems, medical diagnosis, weather forecasting, transportation systems, and cybersecurity. By integrating and interpreting information from multiple sources, information fusion enables improved situational awareness, decision-making, and prediction capabilities, leading to enhanced performance, efficiency, and reliability in complex and uncertain environments. Information fusion could address several challenges specific to Latin America, and we aim to establish a forum to debate its use in producing solutions for the challenges in the region.
A multiagent estimator consists of multiple interacting agents that process the measurements of (possibly multiple) objects of interest in order to infer both the existence (number) of objects and the state of each object. Specific examples of multiagent systems include netted radar, vehicles or drones with onboard sensors, mobile phones, and many others. Driven by the rapid development of electronic, communication, and network technologies, multiagent systems are almost everywhere in modern society. Compared to single-agent systems, multiagent ones provide additional benefits such as spatial diversity, broader coverage, and enhanced observability and/or estimation performance. Moreover, some heterogeneous multiagent systems (e.g., consisting of microphones and cameras) can achieve stereo perception from different viewpoints. Not surprisingly, data processing over multiagent systems is far more challenging than for single-agent systems, and related research hotspots include (but are not limited to) data spreading protocols, fusion, attack resilience, and object matching among agents. The aim of this special session is to collect the newest models, algorithms, technologies, and results concerning estimation, both single-object and multi-object, with multiagent systems. We hope that this special session can bring together researchers involved in multiagent estimation for lively discussion and thereby help promote the development of this research field.
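A recurring difficulty in multiagent fusion is that the cross-correlation between agents' estimation errors is typically unknown (shared process noise, data incest over the network), so naive independent fusion can be overconfident. Covariance intersection is a standard remedy; the sketch below chooses the weighting parameter by a simple grid search over the fused-covariance trace, with all numbers illustrative:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two Gaussian estimates with unknown cross-correlation.
    CI rule: P^-1 = w*P1^-1 + (1-w)*P2^-1, with w chosen here to
    minimize the trace of the fused covariance (grid search)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.01, 0.99, 99):
        P = np.linalg.inv(w * I1 + (1 - w) * I2)
        x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
        if best is None or np.trace(P) < best[2]:
            best = (x, P, np.trace(P))
    return best[0], best[1]

# Two agents' estimates of the same 2-D state; their errors may be
# correlated in unknown ways (e.g., via shared prior information).
xa = np.array([1.0, 0.0]); Pa = np.diag([1.0, 4.0])
xb = np.array([0.8, 0.4]); Pb = np.diag([4.0, 1.0])

x_ci, P_ci = covariance_intersection(xa, Pa, xb, Pb)
# Unlike naive independent fusion, CI never claims less uncertainty
# than is justified when the correlation is unknown.
```

The guaranteed-consistent but conservative behavior of CI is exactly the trade-off discussed in the attack-resilience and data-spreading literature mentioned above.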