Surveillance systems rely heavily on data collected by multi-modality sensors to detect and characterize the behavior of entities and events in a given situation. Transforming multi-modality sensor data into actionable information requires a robust data fusion model. Such a model should acquire data from multi-agent sensors and exploit the spatio-temporal characteristics of the modalities to improve situational awareness, in particular by assisting with soft fusion of multi-threaded information from a variety of sensors under task uncertainty. This research presents a novel image-based model for multi-modality data fusion. The model is biologically inspired by the human brain's energy perceptual model: just as the brain maps immediate sensory experiences onto designated regions and fuses heterogeneous sensory perceptions into a situational understanding for decision-making, the proposed image-based fusion model follows an analogous data-to-information fusion scheme for actionable decision-making.
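The abstract does not specify the book's actual algorithm, but the idea of mapping each sensor modality onto an image-like representation and fusing the maps into one situational picture can be sketched minimally. Everything below (the `to_energy_map` and `fuse_maps` helpers, the grid size, the weighted-average fusion rule, and the acoustic/infrared modalities) is an illustrative assumption, not the author's method:

```python
import numpy as np

def to_energy_map(readings, shape=(8, 8)):
    """Map a 1-D stream of sensor readings onto a normalized 2-D
    'energy' grid (hypothetical mapping for illustration only)."""
    grid = np.resize(np.asarray(readings, dtype=float), shape)
    rng = grid.max() - grid.min()
    return (grid - grid.min()) / rng if rng else np.zeros(shape)

def fuse_maps(maps, weights=None):
    """Fuse per-modality energy maps by a confidence-weighted average,
    yielding a single composite situational image."""
    maps = np.stack(maps)
    if weights is None:
        weights = np.ones(len(maps))
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    return np.tensordot(weights, maps, axes=1)

# Two hypothetical modalities: acoustic and infrared intensity streams.
rng = np.random.default_rng(0)
acoustic = to_energy_map(rng.random(64))
infrared = to_energy_map(rng.random(64))
fused = fuse_maps([acoustic, infrared], weights=[0.6, 0.4])
print(fused.shape)
```

Because each per-modality map is normalized to [0, 1] and the fusion is a convex combination, the fused image stays in the same range, so downstream decision logic can threshold it uniformly regardless of how many modalities contribute.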
Dr. Aaron Rasheed Rababaah is an Associate Professor of Computer Science at the American University of Kuwait. He holds a BSc in Industrial Engineering, an MSc in Computer Science, and a PhD in Computer Systems Engineering. He has seven years of teaching experience at four universities. His research interests include intelligent systems, machine vision, and robotics.