FUSION 2024 is offering 15 half-day tutorials and 3 full-day tutorials, which will provide focused introductions to specific areas of Information Fusion.
Half-day Tutorials – Morning
Half-day Tutorials – Afternoon
Full-day Tutorials
Tutorial Descriptions
Full-day Tutorials
Practical Multi-target Tracking and Sensor Management with Stone Soup
Instructors
- Lyudmil Vladimirov (University of Liverpool, UK)
- Steven Hiscocks (Defence Science and Technology Laboratory, UK)
- James Wright (Defence Science and Technology Laboratory, UK)
- Nicola Perree (Defence Science and Technology Laboratory, UK)
Abstract
The Stone Soup framework is designed to provide a flexible and unified software platform for researchers and engineers to develop, test and benchmark a variety of existing multi-sensor, multi-object estimation algorithms and sensor management approaches. It builds on the object-oriented principles of abstraction, encapsulation and modularity, allowing users (beginners, practitioners or experts) to focus only on the most critical aspects of their problem.
Stone Soup is endorsed by ISIF’s Open Source Tracking and Estimation Working Group (OSTEWG).
These tutorials will introduce participants to Stone Soup’s basic components and how they fit together. They are delivered by way of demonstrations, set tasks and interactive tutorials where participants will be encouraged to write and modify algorithms. These tasks will be written up in the form of interactive browser-based applications which combine the ability to run code with a presentation environment suitable for instruction.
The tutorial will begin with basic examples using linear/non-linear models, filtering, data association and track management, aimed at briefly introducing the topics and familiarising attendees with Stone Soup. The later part of the tutorial is an interactive activity, applying Stone Soup to several scenarios involving simulated and real-world datasets.
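For orientation, the predict/update cycle at the heart of these filtering examples can be sketched in a few lines of NumPy. This is a generic linear Kalman filter, not Stone Soup's actual API, and the model matrices are illustrative:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate state estimate x and covariance P through linear model F."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Correct the prediction with measurement z (measurement model H, noise R)."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Nearly-constant-velocity model in 1D: state = [position, velocity]
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])   # position-only measurement
R = np.array([[0.5]])

x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, np.array([1.2]), H, R)
```

Stone Soup wraps exactly these roles (transition model, measurement model, predictor, updater) in separate interchangeable components, which is what the tutorial's modular exercises build on.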
Contents
Introduction to Stone Soup
- Single Target Tracking
- Multi Target Tracking
- Tracking Practicalities
Application of Stone Soup
- Video Processing
- Drone Tracking
- Open Data
- Sensor Management
- Classification and Tracking
- Bayesian Search
Intended Audience
The tutorial will be suitable for students new to statistical inference and estimation; recent graduates in the mathematical sciences moving into tracking and state estimation; practitioners in industry and government with an interest in algorithm comparison and applications; academic researchers for whom robust baselining is necessary to demonstrate the efficacy of their work.
The tutorial will consist primarily of hands-on practical sessions with simulated/recorded data sets.
High-Level Information Exploitation
Instructors
- Alan Steinberg
Abstract
This tutorial presents important new developments in the use of situational and scenario information in managing complex operations under uncertainty. These concepts enable a structured, systematic approach both (a) for human-centered practitioners to represent and evaluate biological methods of acquiring and applying knowledge, and (b) for systems and AI engineers to build effective systems for the same.
Improved context-sensitive exploitation of diverse information is achievable through a deeper understanding of the concepts involved in representing, recognizing, and predicting relationships, situations, contexts, courses of action, interactions, and outcomes.
Current concepts involved in lower and higher-level data fusion and resource management are presented, as applicable both in systems engineering and in modeling biological information acquisition and use. Methods are discussed for achieving synergy across the levels through a common ontology and architecture for uncertainty management. A unified model framework is defined for representing one’s own courses of action (CoA) for purposes of planning and performance assessment, as well as the CoAs of external entities, for use in scenario and outcome prediction or forensic analysis. Finite-state and continuous-state CoA models are discussed, with necessary and sufficient conditions for state transitions modeled in terms of capability, opportunity, and intent for such transitions over time, employing a utility/probability/cost calculus. We discuss a reference architecture for closed-loop situation/scenario management under uncertainty, whereby level 3 mission management and level 3 fusion processes iteratively plan, evaluate, and execute courses of action using machine learning and game-theoretic methods. Application examples include traditional and asymmetric warfare, involving both machine and human intelligence.
Contents
Data Fusion and Information Exploitation: biological and artificial (15 min)
Levels of Data Fusion and Management (20 min)
Ontological foundations (35 min)
Synergy across the levels (20 min)
Reference Architecture for Multi-level Information Exploitation (45 min)
Level 2 Data Fusion: Relationship and Situation Assessment (45 min)
Course of Action modeling, recognition, and prediction (40 min)
Level 3 Data Fusion: Course of Action and Scenario Assessment (50 min)
Closed-loop Information Exploitation (50 min)
Applications and Approaches (40 min)
Intended Audience
Information system architects, systems engineers, and software developers will find this tutorial to be very useful in designing, developing, testing, and evaluating context-sensitive information exploitation systems.
This tutorial is recommended for practitioners in Cognitive Sciences and Artificial Intelligence/ Machine Learning to gain insight into current concepts in Multi-Level Data Fusion as a structured approach for characterizing, designing, and evaluating processes for closed loop planning and response under uncertainty.
Graph-Based Localization, Tracking, and Mapping
Instructors
- Erik Leitinger
- Florian Meyer
Abstract
Localization and tracking are increasingly important in emerging applications, including autonomous navigation, applied ocean sciences, asset tracking, future communication networks, and the Internet of Things. These applications pose new theoretical and methodological challenges to information fusion due to heterogeneous sensors. Processing measurements is often complicated by uncertainties beyond Gaussian noise: missed detections and clutter, uncertain measurement origin, and an unknown and time-varying number of objects to be localized or tracked.
Methodologically, these challenges can be well addressed by inference that leverages graphical models. The graph-based inference approach has important advantages regarding performance, scalability, versatility, and implementation flexibility. It provides a powerful theoretical framework and a rich set of tools for modeling and exploiting the statistical structure of an inference problem. An inherent advantage of graph-based inference is that it can provide scalable solutions to high-dimensional problems. It also introduces lucidity and modularity into algorithm design since different functional units of the overall problem appear as distinct parts in the graph. Due to these desirable properties, new graph-based modeling and inference techniques are advancing the field of localization and tracking.
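The modularity argument above can be made concrete with a toy example. On a chain-structured factor graph for a two-state hidden Markov chain, the sum-product algorithm reduces to the classic forward filtering recursion, with the transition and emission factors appearing as distinct units; the numbers here are invented for illustration:

```python
import numpy as np

# Two-state hidden Markov chain: sum-product message passing on a chain
# factor graph reduces to the classic forward (filtering) recursion.
T = np.array([[0.9, 0.1],    # transition factor p(x_k | x_{k-1}), rows = from-state
              [0.2, 0.8]])
E = np.array([[0.8, 0.2],    # emission factor p(z_k | x_k), rows = state
              [0.3, 0.7]])

def forward_messages(prior, observations):
    """Propagate messages along the chain; each message is the filtered
    marginal p(x_k | z_{1:k}) up to normalization."""
    msg = prior.copy()
    for z in observations:
        msg = (T.T @ msg) * E[:, z]   # prediction factor, then evidence factor
        msg /= msg.sum()              # normalize (sum-product up to a constant)
    return msg

posterior = forward_messages(np.array([0.5, 0.5]), [0, 0, 1])
```

Swapping the transition or emission factor changes only one node of the graph, which is the design-modularity point made above; the tutorial develops the same idea for data association and multiobject problems.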
Contents
- Probabilistic Graphical Models and Their Properties
- Factor Graphs and Message Passing Algorithms
- Graph-Based Probabilistic Data Association
- Graph-Based Multiobject Tracking
- Graph-Based Simultaneous Localization and Mapping Based on Radio Signals
Intended Audience
The intended audience is graduate students or postdocs with a background in engineering. Recommended prerequisites for this tutorial are probability theory, statistical signal processing, linear algebra, and sequential state-space filtering.
Half-day Tutorials – Morning
Multiagent and Multiobject Estimation
Instructors
- Luigi Chisci
- Alfonso Farina
- Lin Gao
- Giorgio Battistelli
Abstract
The course will provide an overview of advanced research in estimation, specifically concerning the two topics of multiagent and multiobject estimation. Multiagent estimation deals with a network of agents with sensing, processing and communication capabilities that aim to cooperatively monitor a given system of interest. Multiobject estimation aims to detect an unknown number of objects present in a given area and estimate their states. Special attention will be devoted to the fusion of possibly correlated information from multiple agents and to the random-finite-set paradigm for the statistical representation of multiple objects.
Applications to distributed cooperative surveillance, monitoring and navigation tasks will be discussed.
Contents
Review of Bayesian filtering.
Architectural topologies for sensor fusion.
Fusion of radar and ESM (Electronic Support Measures) tracks.
Network modeling and Bayesian approach to multi-agent estimation.
Kullback-Leibler fusion and its properties.
Scalable fusion via consensus.
Distributed Kalman filtering with guaranteed stability.
Event-triggered communication for enhanced efficiency.
Random-finite-set (RFS) modeling of multiple objects.
Multi-object filtering.
Multi-object fusion.
Applications to multi-target tracking, simultaneous localization and mapping (SLAM), source detection and localization.
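For Gaussian local posteriors, the Kullback-Leibler fusion listed above admits a closed form: a convex combination of the local information matrices and information vectors (the same operation underlying covariance intersection). A minimal sketch with hand-picked weights; in practice the weights can be optimized, e.g. to minimize the determinant of the fused covariance:

```python
import numpy as np

def kl_fuse(estimates, weights):
    """Kullback-Leibler average of Gaussian estimates (x_i, P_i):
    convex combination of information matrices and information vectors."""
    info = sum(w * np.linalg.inv(P) for (_, P), w in zip(estimates, weights))
    vec = sum(w * np.linalg.inv(P) @ x for (x, P), w in zip(estimates, weights))
    P_fused = np.linalg.inv(info)
    return P_fused @ vec, P_fused

# Two agents with complementary accuracy in each coordinate
xa, Pa = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
xb, Pb = np.array([1.5, 0.2]), np.diag([4.0, 1.0])
x_f, P_f = kl_fuse([(xa, Pa), (xb, Pb)], [0.5, 0.5])
```

Because the weights sum to one, the fused estimate never double-counts shared information, which is why this rule is robust to the unknown correlations between agents discussed in the course.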
Intended Audience
Researchers in academia, government agencies and industrial companies with a university background in STEM (Science, Technology, Engineering and Mathematics).
An Introduction to Track-to-Track Fusion and the Distributed Kalman Filter
Instructors
- Felix Govaers
Abstract
The increasing trend towards connected sensors (“internet of things” and “ubiquitous computing”) drives demand for powerful distributed estimation methodologies. In tracking applications, the “Distributed Kalman Filter” (DKF) provides an optimal solution under certain conditions. The optimal solution in terms of estimation accuracy is also achieved by a centralized fusion algorithm which receives either all associated measurements or so-called “tracklets”. However, this scheme needs the result of each update step for the optimal solution, whereas the DKF works at arbitrary communication rates since the calculation is completely distributed. Two more recent methodologies are based on “Accumulated State Densities” (ASD), which augment the states from multiple time instants. In practical applications, tracklet fusion based on the equivalent measurement often achieves reliable results even if full communication is not available. The limitations and robustness of tracklet fusion will be discussed. First, the tutorial will explain the origin of the challenges in distributed tracking. Then, possible solutions are derived and illuminated. In particular, algorithms will be provided for each presented solution.
The list of topics includes: Short introduction to target tracking, Tracklet Fusion, Exact Fusion with cross-covariances, Naive Fusion, Federated Fusion, Decentralized Fusion (Consensus Kalman Filter), Distributed Kalman Filter (DKF), Debiasing for the DKF, Distributed ASD Fusion, Augmented State Tracklet Fusion.
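The intuition behind tracklet fusion with an equivalent measurement can be seen in a scalar sketch: two local posteriors that share a common prior double-count the prior's information when naively added, so the decorrelated fusion subtracts the common prior information once. The numbers are illustrative, and the real algorithms operate on full state vectors and covariance matrices:

```python
# Scalar illustration of tracklet ("equivalent measurement") fusion.
def tracklet_fuse(local_tracks, prior):
    """local_tracks: list of (mean, variance) posteriors that share `prior`
    (a (mean, variance) pair); subtract the prior's information (n-1) times
    so it is counted exactly once in the fused result."""
    n = len(local_tracks)
    info = sum(1.0 / P for _, P in local_tracks) - (n - 1) / prior[1]
    y = sum(x / P for x, P in local_tracks) - (n - 1) * prior[0] / prior[1]
    return y / info, 1.0 / info

prior = (0.0, 10.0)
track_a = (1.0, 2.0)   # local posterior at node A
track_b = (1.2, 2.5)   # local posterior at node B
x_f, P_f = tracklet_fuse([track_a, track_b], prior)
```

Naive fusion would skip the subtraction and report an over-confident variance; the tutorial covers when that approximation is acceptable and when cross-covariances must be handled exactly.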
Intended Audience
The intended audience is engineers, PhD students, and others working in the field of distributed sensor data fusion. The algorithmic and theoretical background of track-to-track fusion, tracklet fusion, and the distributed Kalman filter should be of interest to the audience. Problems, questions and specific interests are welcome for an open discussion.
Introduction to Machine Learning Generalization Theory with Information Fusion Applications
Instructors
- Nageswara Rao, Oak Ridge National Laboratory, USA
Abstract
The overall theme of the tutorial is to provide rigorous foundational knowledge for developing and/or applying ML solutions, based on generalization theory, which rigorously characterizes performance beyond the training data, where over-fitting and hallucinations arise. In addition, this theory is applied to information fusion problems involving multiple sources such as sensors and estimators.
Application-specific properties, such as the smoothness of thermal-hydraulic equations and the bounded variation of data-transfer throughput profiles, will be used to design ML solutions together with their generalization equations.
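The flavor of finite-sample generalization theory can be previewed with the simplest such bound, a Hoeffding inequality with a union bound over a finite hypothesis class: with probability at least 1 − δ, true error ≤ empirical error + sqrt((ln|H| + ln(1/δ)) / (2n)). A sketch with illustrative numbers:

```python
import math

def generalization_gap(n, num_hypotheses, delta):
    """Hoeffding + union-bound gap for a finite hypothesis class:
    with probability >= 1 - delta, true error <= empirical error + this gap."""
    return math.sqrt((math.log(num_hypotheses) + math.log(1.0 / delta)) / (2 * n))

# More training data shrinks the bound at rate O(1/sqrt(n)):
g_small = generalization_gap(n=1_000, num_hypotheses=100, delta=0.05)
g_large = generalization_gap(n=100_000, num_hypotheses=100, delta=0.05)
```

The point of such equations, developed much further in the tutorial, is that they bound future performance without touching the test data, which is exactly the safeguard against judging a model by training performance alone.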
Contents
- Introduction and Brief History – 30 min
- Generalization Theory Basics
– Statistical independence and Bayesian methods – 30 min
– Vapnik’s finite sample theory – 30 min
- Generalization Theory of ML Fusers
– Generic multiple sensor fusion – 30 min
– Fusion of multiple classifiers and regressions – 30 min
- Applications
– Sensor drift estimation – 30 min
∗ multiple sensor fusion solution
∗ generalization equations: smoothness of thermal-hydraulic equations
– Data transport infrastructure – 30 min
∗ fusion of multiple estimators
∗ generalization equations: non-smooth, bounded total variation of throughput profiles
– Other Applications Summary – 30 min
Intended Audience
The intended audience is researchers, engineers, and students interested in the application of machine learning (ML) methods to engineering problems with rigorous characterization of their generalization, specifically their performance beyond the training data. This tutorial is specifically tailored to avoid the pitfalls of relying solely on the training performance of ML methods as an indicator of their performance on future data, by utilizing ML generalization theory. Specifically, the generalization equations of ML methods will be used to mitigate artifacts such as over-fitting, hallucination and LLM-specificity that are becoming increasingly common in practical applications due to the proliferation of ready-to-use, opaque ML codes and frameworks.
Estimation of Noise Parameters in State Space Models
Instructors
- Ondrej Straka
- Jindrich Dunik
Abstract
Knowledge of a system model is a crucial prerequisite for many state estimation, signal processing, fault detection, and optimal control problems. The model is often designed to be consistent with the random behavior of the system quantities and properties of the measurements. While the deterministic part of the model often arises from mathematical modeling based on physical, chemical, or biological laws governing the system’s behavior, the stochastic part’s statistics are often challenging to find by the modeling and have to be identified using the measured data. Incorrect description of the noise statistics may result in a significant worsening of estimation, signal processing, detection, control quality, or even failure of the underlying algorithms.
The tutorial presents more than six decades of history as well as recent advances and state-of-the-art methods for estimating the properties of the stochastic part of the model. In particular, the estimation of state-space model noise means, covariance matrices, and other parameters is treated. The tutorial covers all major groups of noise statistics estimation methods, including correlation methods, maximum likelihood methods, covariance matching methods, and Bayesian methods. The methods are introduced in a unified framework highlighting their basic ideas, key properties, and assumptions. Algorithms of individual methods will be described and analyzed to provide a basic understanding of their nature and similarities. The performance of the methods will also be compared using a numerical illustration.
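A deliberately minimal illustration of the covariance-matching idea, not taken from the tutorial itself: if a static quantity is measured with additive white noise of unknown variance R, successive measurement differences have variance 2R, so matching the sample variance of the differences to 2R yields an estimate. The tutorial's methods generalize this to full state-space models.

```python
import random
import statistics

# Toy covariance matching: a static quantity measured with white noise
# of unknown variance R.  Differences z[k] - z[k-1] cancel the constant
# signal and have variance 2R.
random.seed(0)
true_R = 0.25
z = [1.0 + random.gauss(0.0, true_R ** 0.5) for _ in range(20_000)]
diffs = [b - a for a, b in zip(z, z[1:])]
R_hat = statistics.variance(diffs) / 2.0   # match sample statistic to 2R
```

With a mis-specified R, a Kalman filter's innovations become inconsistent with their predicted covariance; the correlation and matching methods in Parts II and III exploit exactly such discrepancies.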
Contents
Part I: Introduction, Motivation, and Basic Design Procedures (40 min)
Part II: Correlation Methods (40 min)
Part III: Maximum Likelihood, Covariance Matching, and Bayesian Methods (40 min)
Part IV: Numerical Comparison and Illustration (40 min)
Part V: Estimation of Means and Parameters (40 min)
Part VI: Showcase of noise parameter application in real-world problems (20 min)
Part VII: Implementation, interactive demonstration of the provided code (30 min)
Intended Audience
Researchers in academia and industry, engineers, and graduate students. A similar tutorial has been organized during the FUSION conferences since 2017 and was attended by participants in all career stages working in technical areas such as state estimation, target tracking and navigation, decision making, stochastic systems, and system identification, but also in non-technical areas.
Context-enhanced Information Fusion
Instructors
- Erik Blasch
- Lauro Snidaro
Material prepared in collaboration with: Jesus Garcia, James Llinas
Abstract
Contextual Information (CI) can be understood as the information that “surrounds” an observable of interest. Even if not directly part of the problem variables being estimated by the system, CI can influence their state or even the sensing and estimation processes themselves. Therefore, understanding and exploiting CI can be a key element for improving the performance of Information Fusion algorithms, and of automatic systems in general that have to deal with varying operating conditions. There is growing interest in this promising research topic, which should be considered for developing next-generation Information Fusion systems.
Context can have static or dynamic structure, and be represented in many different ways such as maps, knowledge-bases, ontologies, etc. It can constitute a powerful tool to favour adaptability and boost system performance. Application examples include: context-aided surveillance systems (security/defence), traffic control, autonomous navigation, cyber security, ambient intelligence, ambient assistance, etc.
The purpose of this tutorial is to survey existing approaches for context-enhanced information fusion, covering the design and development of information fusion solutions integrating sensory data with contextual knowledge. After discussing CI in other domains, the tutorial will focus on context representation and exploitation aspects for Information Fusion systems. The applicability of the presented approaches will be illustrated with real-world context-aware Information Fusion applications.
Contents
1: Representation and exploitation of contextual information at different levels of an Information Fusion system
2: Management of heterogeneous contextual sources; adaptation techniques to have the system respond not only to the changing target state but also to the surrounding environment
3: Architectural issues and possible solutions
4: Applications: augmentation of tracking, classification, recognition, situation analysis, and other algorithms with contextual information
Intended Audience
This tutorial will be valuable for researchers, developers, and practitioners, while primarily intended for:
– Researchers in basic science exploring high-level information fusion theory and applications, working towards demonstrations in laboratory simulations and operational field studies for situation awareness.
– System engineers and developers of information fusion and command and control systems who are required to specify, develop, integrate, test, and evaluate high-level information fusion capabilities.
– Technical managers who oversee data and command and control developments; for these managers the tutorial will serve as a valuable technical discussion on the terminology, concepts, and implementation challenges of high-level information fusion.
– Graduate level students studying advanced information fusion theory, representations, techniques, and technologies.
Multi-Sensor and Data Fusion Approaches for Vehicular Automation Applications - Autonomous Driving: Concepts, Implementations and Evaluation
Instructors
- Bharanidhar Duraisamy
- Ting Yuan
- Tilo Schwarz
- Martin Fritzsche
Abstract
This tutorial provides an introduction to, and a discussion and better understanding of, the following topics:
– Sensor fusion levels and architectures for autonomous vehicles
– Different environment perception data and representation
– Objects, Grids and Raw Data oriented sensor fusion problems
– Target signatures from various environment perception sensors
– Simulation and its various applications in autonomous driving domain
– Discussion on usage of ground truth reference techniques for various modelling problems
– Impact of Artificial Intelligence in the operational tool-chain of sensor fusion
– Nitty-gritty details that play a vital role in real-life sensor fusion applications
– Some of the concepts used in the recent L3 automated driving system (2022) will be discussed
This tutorial focuses on the stringent requirements, foundations, development and testing of sensor fusion algorithms for advanced driver assistance functions, self-driving car applications in automotive vehicle systems, and vehicular-infrastructure-oriented sensor fusion applications. The audience will be provided with the presentation materials used in the tutorial. The complex sensor world of autonomous vehicles is discussed in detail, and the different aspects of the sensor fusion problem in this area form a core subject of the tutorial. In addition, a special discussion section is presented on a sensor fusion system designed to work on data obtained from environment perception sensors placed in an infrastructure such as a parking garage.
Intended Audience
The audience will see the different representations of the surrounding environment as perceived by heterogeneous environment perception sensors, e.g. different kinds of radar (multi-mode radar, short-range radar), stereo camera and lidar. The relevant state estimation algorithms, sensor fusion frameworks and evaluation procedures with reference ground truth are presented in detail. The audience will get a first glimpse of the data set obtained from a sensor configuration to be used in future Mercedes-Benz autonomous vehicles. Different target signatures obtained for various types of targets under different sensory conditions will be presented. An important development tool, the simulation software that also helps in evaluating concept or production software and in verifying and validating different sensor models, will be discussed in detail.
Scalable-AI (CANCELLED)
Instructors
- Andrea Pilzer
Abstract
With the breakthrough of Large Language Models (LLMs) we are observing unprecedented performance of deep learning algorithms, now extending to multi-modality such as text, image and video. These models are trained on High Performance Computing (HPC) clusters equipped with thousands of GPUs, rendering deep learning no longer a niche research domain but a computational science demanding significant compute power. Based on this realization, the idea of this tutorial is to give researchers more tools to improve their models and make maximal use of the hardware typically used to train them (GPUs).
Why is this tutorial relevant for the Fusion research community? The Fusion conference has historically focused on signal processing among other topics, making it relevant for the adoption of deep learning techniques. Deep learning models prove especially valuable when traditional methods struggle to handle complex or abundant data. On one hand, there are several problems addressed in the Fusion conference that could benefit from deep learning, for example automatically learning information fusion (e.g. multimodal transformers) or “classic” tasks like localization, tracking and classification.
On the other hand, the Fusion community has a lot of theoretical knowledge, that could be very useful to improve the understanding of the behaviour of deep learning models (e.g. uncertainty estimation, theoretical analysis).
Contents
Welcome, introduction (10 min)
Parallelization techniques (Data, Model, Pipeline Parallelism) (60 min)
Dataloaders (30 min)
Mixed precision training (30 min)
Optuna (30 min)
DeepSpeed (30 min)
Intended Audience
This tutorial has no specific prerequisites and will cover the basics of high-performance deep learning. It is particularly useful for early-career scientists, who on average (there are exceptions) code much more than senior scientists. The focus is on giving practical examples and advising on how to spend less time experimenting and more time thinking about research problems. Attendees with some deep learning background and Python experience will find it easier to follow.
Half-day Tutorials – Afternoon
Data Fusion for TinyML
Instructors
- Claudio M. de Farias
Abstract
The Internet of Things (IoT) is a novel paradigm that is grounded on Information and Communication Technologies (ICT). Recently, the use of IoT has been gaining traction in areas such as logistics, manufacturing, retailing, and pharmaceutics, transforming typical industrial spaces into Smart Spaces.
Traditional machine learning algorithms may not be suited to resource-constrained scenarios. TinyML emerges as a viable solution: by optimizing ML algorithms and models for efficiency and deploying them directly on microcontrollers or other low-power processors, TinyML enables on-device inference without relying on cloud-based servers. For TinyML, data fusion techniques are useful to further compress models, combine data sources, and clean data, reducing decision response times and enabling more intelligent and immediate situation awareness. This tutorial aims to present TinyML algorithms in the multisensor data fusion context, both theoretically and in practice.
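One compression technique from the TinyML toolbox can be sketched in a few lines: 8-bit affine quantization maps float weights to unsigned bytes with a scale and zero point, so a model can be stored and run on a microcontroller in integer form. The weights below are made up for illustration:

```python
# Sketch of 8-bit affine quantization for on-device model compression:
# map float weights to uint8 with a scale and zero point, then
# dequantize (or compute directly in integers) at inference time.
def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0            # guard against all-equal weights
    zero = round(-lo / scale)                   # integer that represents 0.0
    q = [max(0, min(255, round(w / scale) + zero)) for w in weights]
    return q, scale, zero

def dequantize(q, scale, zero):
    return [(v - zero) * scale for v in q]

w = [-0.51, 0.0, 0.27, 1.02]
q, scale, zero = quantize(w)
w_back = dequantize(q, scale, zero)
```

The reconstruction error is bounded by the scale, i.e. the weight range divided by 255, which is the trade-off the data-fusion-assisted compression sessions of this tutorial aim to manage.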
Contents
1 – The Smart Spaces scenario including its emergence, advantages and challenges. We will show how the industry has moved towards the use of Machine Learning and Internet of Things; (30 min)
2 – Discuss how traditional ML methods can be limited in some potential scenarios, including decision conflicts and synergy. (15 min)
3 – Show the emergence of TinyML paradigm. (15 min)
4 – How Multisensor Data Fusion methods can be used to aid TinyML; (30 min)
5 – Discuss how data fusion can aid in decision-making, pruning, model compression and anomaly detection; (45 min)
6 – Discuss Interpretable AI, Federated Learning and HyperIntelligence as drivers to Smart Spaces; (30 min)
7 – Several case studies including Systems Security, Smart Factories, Structural Health Monitoring, Smart Health, Smart farms, Smart buildings, Smart Vehicles and Smart Grids; (30 min)
8 – An example produced using FreeRTOS, Micropython and tensorflow-lite-micro and a simulation example considering agricultural industry; (30 min)
9 – Future research directions in this area; (15 min)
Intended Audience
People who can benefit from this tutorial include researchers, system designers and developers from industry and academia working in the following areas: information fusion in general, machine learning, the Internet of Things, and decision support systems.
Poisson Multi-Bernoulli Mixtures for Multiple Target Tracking
Instructors
- Ángel García-Fernández
- Yuxuan Xia
Material prepared in collaboration with: Lennart Svensson, Karl Granström.
Abstract
In this tutorial, attendees will learn the foundations of the Poisson multi-Bernoulli mixture (PMBM) filter, a state-of-the-art multiple target tracking (MTT) algorithm that has been applied to data from lidars, radars, cameras, integrated search-and-track sensor management and 5G simultaneous localization and mapping. In addition, attendees will learn the relations of the PMBM filter to other MTT algorithms such as the multi-Bernoulli mixture (MBM) filter, probability hypothesis density (PHD) filter, Poisson multi-Bernoulli (PMB) filter, δ-generalised labelled multi-Bernoulli (δ-GLMB) filter, multiple hypothesis tracking (MHT), and joint integrated probabilistic data association (JIPDA) filter. Finally, this tutorial will cover the extension of the PMBM filter to sets of trajectories to include full trajectory information.
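A flavor of the Bernoulli building block: the existence probability r of a Bernoulli component is updated by Bayes' rule, so a missed detection (with detection probability p_d) is evidence against existence. A minimal sketch; the full PMBM update additionally handles detections, clutter and data association:

```python
def bernoulli_miss_update(r, p_d):
    """Existence probability of a Bernoulli component after a missed
    detection: Bayes' rule treats the miss as evidence against existence."""
    return r * (1.0 - p_d) / (1.0 - r * p_d)

r = 0.9
for _ in range(3):          # three consecutive missed detections
    r = bernoulli_miss_update(r, p_d=0.8)
```

After three misses the component's existence probability has collapsed from 0.9 to below 0.1, which is the mechanism by which PMBM-type filters prune stale hypotheses.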
Contents
– Basic notions of random finite sets: multi-target density, cardinality distribution, PHD, convolution formula.
– Basic types of random finite sets: Poisson, Bernoulli, multi-Bernoulli.
– PMBM filtering: overview of its structure, prediction and update.
– The MBM filter: a special case of PMBM filtering.
– Relation between the PMBM/MBM filters and the δ-GLMB filter, including adaptive birth.
– Relation to approximate filters: PMB and PHD filters.
– PMB filters obtained via belief propagation.
– Relation of the PMBM/PMB filters to classic MTT approaches: MHT and JIPDA.
– Extension of the PMBM filter to sets of trajectories to obtain full trajectory information.
Implementations of the most relevant algorithms and slides will also be provided. This tutorial is complemented by the edX/YouTube course on multiple object tracking, where comprehensive material and exercises are provided.
Intended Audience
The intended audience is researchers with previous knowledge of single-target tracking and Kalman filtering, for example PhD students, researchers working in industry, and academics. A basic understanding of random finite sets will also be helpful.
Quantum Computing and Quantum Physics Inspired Algorithms: Introduction and Data Fusion Examples
Instructors
- Felix Govaers
- Martin Ulmke
- Wolfgang Koch
Abstract
Quantum algorithms for data fusion may become game changers as soon as quantum processing kernels exist embedded in hybrid processing architectures with classical processors. While emerging quantum technologies directly apply quantum physics, quantum algorithms do not exploit quantum physical phenomena as such, but rather use the sophisticated framework of quantum physics to deal with “uncertainty”. Although the link between mathematical statistics and quantum physics has long been known, the potential of physics-inspired algorithms for data fusion has just begun to be realized. While the implementation of quantum algorithms is to be considered on classical as well as on quantum computers, the latter are anticipated as well-adapted “analog computers” for unprecedentedly fast solving of data fusion and resource management problems. While the development of quantum computers cannot be taken for granted, their potential is nonetheless real and has to be considered by the international information fusion community.
Intended Audience
The intended audience is engineers, PhD students, and others working in the field of data fusion and target tracking. Some basic background knowledge of quantum physics can help but is not required. The interest of the audience should be in both quantum computing and quantum-inspired algorithms for data fusion. Problems, questions and specific interests are welcome for an open discussion.
Multitarget Tracking and Multisensor Information Fusion: Recently Developed Advanced Algorithms
Instructors
- Yaakov Bar-Shalom
Abstract
This tutorial will provide participants with several of the latest state-of-the-art advanced algorithms for estimating the states of multiple targets in clutter and for multisensor information fusion. These form the basis of automated decision systems for advanced surveillance and targeting.
Advanced algorithms are discussed, including track-to-track fusion from heterogeneous sensors and the cross-covariance between state estimators of different dimensions based on the “mapped process noise” between active and passive sensors. Optimal measurement extraction from optical sensors is covered, together with the resulting accuracies. The Maximum Likelihood Probabilistic Data Association algorithm for VLO targets is presented together with its application to real data, where it is shown to provide earlier track detection than the MHT.
Contents
Target tracking and data fusion: How to Get the Most Out of Your Sensors (and make a living out of this)
Heterogeneous and Asynchronous Information Matrix Fusion
Asynchronous and Heterogeneous Track-to-Track Fusion with Mapped Process Noise and Cross-Covariance
Measurement Extraction for a Point Target from an Optical Sensor
Acquisition of a 4 dB SNR TBM target with an ESA radar.
Intended Audience
Engineers/scientists with prior knowledge of basic probability and state estimation. This is an intensive course in order to cover several important recent advances and applications.
Selected Topics in Sequential Bayesian Estimation
Instructors
- Branko Ristic, RMIT University, Australia
Abstract
The tutorial will cover four advanced topics of sequential Bayesian estimation.
- Rao-Blackwellisation of particle filters. Particle filters are computationally expensive. In some applications it is possible to formulate an analytic optimal Bayesian solution for a subspace of the state space, and thus reduce the dimension of the state in which particle filtering is required. The concept will be demonstrated by examples.
- Sensor control for a reactive target. Sensor control is active or plan-ahead sensing, where sensing and Bayesian estimation are conducted sequentially in a closed loop.
Sensor control is typically formulated as partially observed Markov decision process (POMDP), where the reward is defined in the form of information gain. The main limitation of this paradigm is that it assumes that the target is non-reactive, that is, it does not change its behavior as a consequence of being observed. This tutorial will introduce game theory, as a mathematical framework for studying interactions between intelligent players, in the context of sensor control for a reactive target.
- Formulation of target tracking in the framework of possibility theory. Traditional target tracking algorithms are formulated in the framework of Bayesian probability theory.
The main limitation of the traditional approach is that probabilistic models of target dynamics and sensor measurements need to be known precisely. When probabilistic models are known only partially, possibility theory provides an alternative approach.
The tutorial will formulate Bayesian-like possibilistic tracking algorithms and demonstrate their performance.
- Bayesian simultaneous localisation and mapping. Simultaneous localisation and mapping (SLAM) can be formulated as a sequential Bayesian estimation problem, where a moving robot, equipped with ranging sensor(s) and using odometry data, gradually creates a map of an unknown environment. The tutorial will explain Bayesian SLAM algorithms (including the famous Gmapping algorithm) in the context of both feature-based and occupancy grid maps.
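To give a flavour of the mapping half of occupancy-grid SLAM, the sketch below accumulates per-cell log-odds evidence from range scans. It assumes the robot pose and the ray-traced free/occupied cells are already known (full SLAM estimates the pose as well), and the log-odds constants are illustrative, not taken from any particular algorithm.

```python
import numpy as np

# Log-odds occupancy grid update: the mapping step of occupancy-grid
# SLAM, assuming a known robot pose.
L_OCC, L_FREE = 0.85, -0.4   # illustrative inverse-sensor-model log-odds

def update_grid(log_odds, occupied_cells, free_cells):
    """Accumulate log-odds evidence from one range scan."""
    for r, c in occupied_cells:
        log_odds[r, c] += L_OCC    # cell where the beam ended
    for r, c in free_cells:
        log_odds[r, c] += L_FREE   # cells the beam passed through
    return log_odds

def to_probability(log_odds):
    return 1.0 / (1.0 + np.exp(-log_odds))

grid = np.zeros((5, 5))   # log-odds 0 everywhere, i.e. p = 0.5 (unknown)
# Two scans along row 2: beam hits cell (2, 4), passes through (2, 1..3)
grid = update_grid(grid, [(2, 4)], [(2, 1), (2, 2), (2, 3)])
grid = update_grid(grid, [(2, 4)], [(2, 1), (2, 2), (2, 3)])
p = to_probability(grid)
```

Repeated observations drive the hit cell towards occupied and the traversed cells towards free, while unobserved cells stay at 0.5.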
Intended Audience
The tutorial is prepared for postgraduate students and researchers with basic knowledge of target tracking and particle filtering.
DIFA: Deep Learning-based Image Fusion and Its Applications (CANCELLED)
Instructors
- Xingchen Zhang
- Zhixiang Chen
- Shuyan Li
- Yiannis Demiris
Abstract
Deep learning-based image fusion has garnered considerable attention in recent years. However, despite massive progress, some challenges remain. Notably, there have not been any tutorials dedicated to image fusion. This tutorial will provide an introduction to deep learning-based image fusion and its applications, covering both methods and typical applications. It will also delve into other pertinent challenges, such as the need for comprehensive image fusion benchmarks. Hosting this tutorial now offers a timely opportunity to summarize the development of image fusion and introduce image fusion to more researchers. Additionally, this tutorial presents an opportunity for students and researchers in other fields to discover how they can benefit from image fusion technologies.
Image fusion has been a subject of study for over 30 years. The field experienced a significant milestone in around 2017 with the introduction of deep learning techniques. Since then, deep learning-based image fusion has attracted significant attention, leading to substantial progress in algorithms, datasets, and benchmarks. Furthermore, image fusion has demonstrated its utility across a wide range of applications, including object tracking, object detection, medical image processing, robotics, autonomous driving, scene segmentation, pedestrian and cyclist detection, salient object detection, power facility inspection, surveillance, face recognition, crowd counting, vital sign measurement, motion estimation, crack detection in civil infrastructure, and multi-physiological signal estimation.
The tutorial will introduce the latest advancements in deep learning-based image fusion and discuss its wide range of applications. This tutorial will offer a space for image fusion researchers to know both the advancements and challenges, and for the broad community in computer vision and robotics to discover how their applications can benefit from the development of image fusion. To the best of our knowledge, this will be the first tutorial to specifically focus on the intricacies of image fusion, highlighting the unique value of our DIFA tutorial in the context of related events. Particularly at a time when the field of image fusion is evolving rapidly, this tutorial will be invaluable in providing a forum to summarize past developments, delve into current trends, and anticipate the future directions of this dynamic field.
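To make the fusion task itself concrete, here is a deliberately naive pre-deep-learning baseline for visible/infrared fusion: keep the brighter source at each pixel. Deep fusion networks replace this hand-crafted rule with learned feature extraction and fusion; the arrays below are illustrative.

```python
import numpy as np

def fuse_pixelwise_max(visible, infrared):
    """Toy baseline: at each pixel, keep the brighter of the two
    registered sources (intensities in [0, 1])."""
    assert visible.shape == infrared.shape, "inputs must be registered"
    return np.maximum(visible, infrared)

vis = np.array([[0.2, 0.9], [0.5, 0.1]])   # visible-light intensities
ir  = np.array([[0.7, 0.3], [0.4, 0.8]])   # infrared intensities
fused = fuse_pixelwise_max(vis, ir)        # [[0.7, 0.9], [0.5, 0.8]]
```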
Contents
- Different image fusion tasks (40 min)
– Visible and infrared image fusion
– Multi-focus image fusion
– Multi-exposure image fusion
– Medical image fusion
– General image fusion
- Deep learning methods for image fusion (60 min)
– CNN-based methods
– GAN-based methods
– Autoencoder-based methods
– Transformer-based methods
– Diffusion model-based methods
– Application-driven image fusion methods
- Dataset and performance evaluation (40 min)
– Image fusion datasets
– Image fusion evaluation metrics
– Image fusion evaluation methods
– Image fusion benchmarks
- Image fusion applications (60 min)
– Object tracking
– Object detection
– Scene segmentation
– Robotics
– Others
- Future trends of deep learning-based image fusion (20 min)
- Open questions and discussions (20 min)
Intended Audience
We cordially invite students and researchers from the fields of image fusion, image registration, and computer vision to attend the tutorial and delve into recent advancements and challenges in image fusion. Additionally, students and researchers from other domains who have an interest in the application of image fusion are also encouraged to participate. This tutorial represents a unique opportunity to engage with the latest developments in the field and to network with peers who share a common interest in the potential of image fusion technologies.
Multiple Extended Object Tracking for Automotive Applications
Instructors
- Jens Honer
- Marcus Baum
Abstract
In order to safely navigate through traffic, an automated vehicle needs to be aware of the trajectories and dimensions of all dynamic objects (e.g., traffic participants) as well as the locations and dimensions of all stationary objects (e.g., road infrastructure). For this purpose, automated vehicles are equipped with modern high-resolution sensors such as LIDAR, RADAR or cameras that allow them to detect objects in the vicinity. Typically, the sensors generate multiple detections for each object, and the detections are unlabeled, i.e., it is unknown which object produced which detection.
Furthermore, the detections are corrupted by sensor noise, e.g., some detections might be clutter, and some detections might be missing. The task of detecting and tracking an unknown number of moving spatially extended objects (e.g., traffic participants) based on noise-corrupted unlabeled measurements is called multiple extended object tracking. This tutorial will introduce state-of-the-art theory for multiple extended object tracking together with relevant real-world automotive applications. In particular, we will demonstrate applications for different object types, e.g., pedestrians, bicyclists, and cars, using different sensors such as LIDAR, RADAR, and camera.
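The measurement situation can be sketched as follows: an extended object produces several detections per scan, scattered over its extent and corrupted by noise. A minimal batch estimate in the spirit of random-matrix approaches uses the sample mean for the centroid and the sample covariance (minus sensor noise) for the shape; full trackers do this recursively and additionally handle clutter, missed detections, and unknown association. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# An extended object (e.g. a car seen by LIDAR) generates many
# detections per scan, spread over its elliptical extent.
center = np.array([10.0, 5.0])
extent = np.diag([2.0, 0.5])           # extent in covariance form
noise  = 0.01 * np.eye(2)              # sensor measurement noise
detections = rng.multivariate_normal(center, extent + noise, size=200)

# Minimal centroid and extent estimate from one batch of detections
centroid_hat = detections.mean(axis=0)
extent_hat = np.cov(detections, rowvar=False) - noise
```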
Contents
Multiple extended object tracking:
- Modeling multiple extended object tracking with random finite sets
- Bayes’ Theorem with random finite sets
- Approximation schemes for Bayes’ Theorem and tractable random set filters
- Connection between multi-target tracking and extended target tracking
Single extended object tracking:
- Models and methods for tracking elliptical and star-convex approximations of extended objects
- Learning-based approaches for extended object tracking
- Classification of extended objects
- Multi-sensor fusion for extended objects
Intended Audience
The tutorial aims at professionals and academics who are interested in the field of sensor fusion and tracking. As a prerequisite, basic knowledge of sequential Bayesian estimation methods (such as Kalman filtering) is recommended.
After attending the tutorial, participants will be familiar with the state of the art in multiple extended object tracking and environment modeling. They will be in a position to implement and evaluate track management, data association, shape estimation, and fusion methods for extended objects.
Sensor Fusion and Tracking with MATLAB®
Instructors
- Prashant Arora (The MathWorks, Inc.)
- Elad Kivelevitch (The MathWorks, Inc.)
Abstract
Sensor fusion and tracking form the backbone of numerous systems that require a comprehensive understanding of their operational environment. These systems range from autonomous vehicles navigating dynamic terrains to surveillance systems maintaining situational awareness across various domains such as air, space, maritime, and ground. These systems often integrate multiple sensor types—radar, cameras, infrared sensors, lidar, sonar, etc.—to achieve a robust and accurate perception of their surroundings.
Despite the prevalence of tracking algorithms in the literature, such as the Kalman filter since the 1960s, practitioners often find themselves coding these algorithms from scratch for their specific use cases. This can be time-consuming, error-prone, and a hindrance to innovation. Additionally, testing and analyzing these algorithms with simulated or real data using advanced metrics can be equally challenging. The tutorial presents a comprehensive suite of tools to address these challenges:
- Common motion and measurement models
- Tracking filters
- Assignment algorithms
- Multi-object trackers
- Track-to-track fusion algorithms
- A simulation environment for scenario definition and sensor modeling
- A set of detailed and score-based metrics for tracking performance evaluation
- Visualization tools
With these tools, users can more efficiently prototype new solutions, evaluate their performance, and generate C/C++ code for deployment on hardware systems.
Contents
Part 1: Introduction to tracking with MATLAB®
Part 2: Examples
– Tracking with a radar, illustrated with a Texas Instruments mmWave radar
– Tracking with a camera, illustrated with a phone camera
– Tracking for autonomous vehicles, integrating camera, lidar, and radar sensors
– Wide area surveillance tracking utilizing active and passive radars
Additionally, participants will have the opportunity to select from a broader set of examples from a website.
Intended Audience
This tutorial is designed for a diverse audience including students new to the field, recent graduates, industry practitioners, academic researchers, government employees, and professors seeking practical tools for their courses. The content will be accessible to those looking to grasp the foundational concepts of sensor fusion and tracking as well as those aiming to implement these techniques in real-world scenarios using MATLAB.
Contact
Tutorials:
tutorials@fusion2024.org