
Sensor fusion is the ability to bring together inputs from multiple radars, lidars and cameras to form a single model or image of the environment around a vehicle.

The resulting model is more accurate because it balances the strengths of the different sensors.
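As a minimal sketch of why the fused estimate improves on either input alone, the snippet below combines two independent range readings (labelled radar and lidar purely for illustration) by inverse-variance weighting, so the more precise sensor carries more weight and the fused variance is smaller than either sensor's own. The function name and numeric values are assumptions for illustration, not part of any particular toolchain.

```python
def fuse_measurements(z1, var1, z2, var2):
    """Inverse-variance fusion of two independent measurements of the
    same quantity: each sensor is weighted by its confidence, and the
    fused variance is never larger than either input variance."""
    w1 = var2 / (var1 + var2)   # weight on sensor 1
    w2 = var1 / (var1 + var2)   # weight on sensor 2
    fused = w1 * z1 + w2 * z2
    fused_var = (var1 * var2) / (var1 + var2)
    return fused, fused_var

# Illustrative values: a noisier radar range and a more precise lidar range
radar_range, radar_var = 25.3, 0.50   # metres, metres^2
lidar_range, lidar_var = 25.1, 0.05

estimate, variance = fuse_measurements(radar_range, radar_var,
                                       lidar_range, lidar_var)
print(f"fused range: {estimate:.2f} m, variance: {variance:.3f} m^2")
```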

The data sources for a fusion process are not required to originate from identical sensors. One can distinguish direct fusion, indirect fusion, and fusion of the outputs of the former two.

Direct fusion is the fusion of sensor data from a set of heterogeneous or homogeneous sensors, soft sensors, and historical values of sensor data, while indirect fusion uses information sources such as a priori knowledge about the environment and human input.

Sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.

The objective of data fusion is to improve overall system performance, including:

  • Improved decision making
  • Increased detection capabilities
  • Diminished number of false alarms
  • Improved reliability

Different data fusion methods have been developed in order to optimize the overall system output in a variety of applications for which data fusion might be useful: security (humanitarian, military), medical diagnosis, environmental monitoring, remote sensing, robotics, etc.

The concept of sensor fusion attempts to replicate the capability of the central nervous system to process sensory inputs from multiple sensors simultaneously.

For robotic devices, for example, feedback from one sensor is typically not enough, particularly for implementation of control algorithms. Sensor fusion can be used to compensate for deficiencies in information by utilizing feedback from multiple sensors.
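A common way to fuse feedback from two such sensors in a control loop is a complementary filter, which trusts a gyroscope for fast, short-term changes and an accelerometer for a drift-free (but noisy) absolute tilt reference. The sketch below is illustrative only; the function name, sample rate, blending factor and sample values are assumptions, not taken from any specific robot or course material.

```python
def complementary_filter_step(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One step of a complementary filter: integrate the gyroscope rate
    for short-term accuracy and blend in the accelerometer-derived angle
    to correct long-term drift."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Illustrative loop over synthetic IMU samples
angle = 0.0
dt = 0.01                                            # 100 Hz sample period
samples = [(0.20, 0.5), (0.21, 0.6), (0.19, 0.8)]    # (gyro rad/s, accel angle rad)
for gyro_rate, accel_angle in samples:
    angle = complementary_filter_step(angle, gyro_rate, accel_angle, dt)
print(f"estimated tilt angle: {angle:.3f} rad")
```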

There are several categories or levels of sensor fusion that are commonly used:

  • Level 0 – Data alignment
  • Level 1 – Entity assessment (e.g., signal/feature/object): tracking and object detection/recognition/identification
  • Level 2 – Situation assessment
  • Level 3 – Impact assessment
  • Level 4 – Process refinement (i.e., sensor management)
  • Level 5 – User refinement

The sensor fusion level can also be defined based on the kind of information used to feed the fusion algorithm. More precisely, fusion can be performed on raw data coming from different sources, on extracted features, or even on decisions made by individual nodes.
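As an illustration of the decision-level end of that spectrum, the hypothetical sketch below fuses classifications already made by individual sensor nodes using a simple majority vote; raw-data or feature-level fusion would instead combine the measurements or extracted features before any per-node decision is made.

```python
from collections import Counter

def decision_level_fusion(node_decisions):
    """Decision-level fusion: each sensor node reports its own
    classification, and the fusion centre combines them by majority
    vote, returning the winning label and the fraction of agreement."""
    votes = Counter(node_decisions)
    label, count = votes.most_common(1)[0]
    return label, count / len(node_decisions)

# Illustrative labels reported by three independent detection nodes
decisions = ["vehicle", "vehicle", "pedestrian"]
label, agreement = decision_level_fusion(decisions)
print(f"fused decision: {label} (agreement {agreement:.0%})")
```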

Want to know more? Tonex offers Sensor and Data Fusion Training Bootcamp, a 3-day course that covers technologies, tools and methods to automatically manage multi-sensor data filtering, aggregation, extraction and fusion, producing data useful to intelligence analysts and war fighters.

Learn about the application of artificial neural network technology to data fusion for target recognition, airborne target recognition, activity-based intelligence, C4ISR, Electronic Warfare (EW), radar and EO-IR thermal imaging sensors, missile defense, cyber warfare, air, space and maritime surveillance, net-centric warfare, effects-based operations, process control, proactive maintenance and industrial automation.

For more information, questions, comments, contact us.
