The largest knowledge base of Dutch universities of applied sciences (HBO)

Inspiration in your field

Freely accessible


Modelling human-like visual perception for intelligent multi-modal information fusion

Open access

Rights:


Abstract

Military sensors are used to create situational awareness. In a process of multi-sensor data fusion, a sensor grid containing a multitude of similar and different sensor types contributes to the overall awareness by gathering and combining input data. Such data fusion has been applied in numerous military applications, including ocean surveillance, air-to-air defence, battlefield intelligence, surveillance and target acquisition, and strategic warning and defence. Regarding the sensor grid there are several recent developments: sensor types are becoming multimodal, and mobile sensor deployment will be increasingly autonomous. Multimodal means using multiple modalities, i.e., different types of physical phenomena that can be sensed, such as light and sound. In terms of military applications one can think of a combination of radar and electro-optical systems, or electro-optical combined with acoustic systems. Non-military examples of multimodal fusion include enhancing automatic speech recognition with visual features, and person identity verification. The aim of our research is to design and implement an autonomous and adaptive surveillance system based on a video and an acoustic sensor. In order to achieve this goal, we need to implement a suitable data fusion framework and fitting fusion techniques. The fusion of the two modalities, vision and audio, has to solve the problems of ambiguity, redundancy and synchronicity in a seamless manner. The idea is that the surveillance system takes over the task of the human observer: interpreting the scene and watching for aggressive or other illegal activities, which are then reported to surveillance personnel who can take the appropriate action.
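The abstract does not spell out the fusion technique itself. Purely as an illustration of the synchronicity and redundancy problems it mentions, the sketch below pairs time-stamped detection confidences from a video and an audio detector by nearest timestamp and combines them with a weighted average; the weights, the skew tolerance and the function names are hypothetical choices, not taken from the article.

```python
# Hypothetical sketch: late fusion of per-event confidences from two
# modalities. Weights and the max_skew tolerance are illustrative only.

def fuse(video_events, audio_events, w_video=0.6, w_audio=0.4, max_skew=0.5):
    """video_events / audio_events: lists of (timestamp_s, confidence).
    Returns fused (timestamp, confidence) pairs, one per video event."""
    fused = []
    for tv, cv in video_events:
        # Synchronicity: pair each video event with the closest audio event.
        ta, ca = min(audio_events, key=lambda e: abs(e[0] - tv))
        if abs(ta - tv) <= max_skew:
            # Redundancy: corroborating audio raises the fused confidence.
            fused.append((tv, w_video * cv + w_audio * ca))
        else:
            # No audio close enough in time: video evidence stands alone.
            fused.append((tv, w_video * cv))
    return fused

alerts = fuse([(1.0, 0.9), (5.0, 0.4)], [(1.1, 0.8), (9.0, 0.7)])
```

A corroborated event (video at 1.0 s, audio at 1.1 s) ends up with a higher fused confidence than an uncorroborated one, which is the intuition behind combining redundant modalities.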
In order to achieve autonomous surveillance with a multimodal, intelligent sensor, we believe that understanding and modelling human perception is at the crux of making intelligent, context-sensitive systems that try to make sense of the overwhelming amount of data coming in through their sensors. In this article we focus purely on the visual part of our fusion model and present our computational model for visual perception, including the results obtained so far.
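The abstract announces a computational model of visual perception without giving its details here. As an illustration only, the toy sketch below computes a centre-surround contrast map, a building block common to classic computational models of visual attention (saliency maps); the grid, window radius and normalisation are hypothetical and not taken from the article's model.

```python
# Hypothetical sketch: a toy centre-surround saliency computation on a
# small intensity grid. Cells that differ strongly from their local
# surround score high, mimicking how salient regions "pop out" visually.

def saliency(grid, radius=1):
    """Return, per cell, |centre - mean(surround)| for a 2-D grid."""
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            surround = [grid[j][i]
                        for j in range(max(0, y - radius), min(h, y + radius + 1))
                        for i in range(max(0, x - radius), min(w, x + radius + 1))
                        if (i, j) != (x, y)]
            out[y][x] = abs(grid[y][x] - sum(surround) / len(surround))
    return out

# A single bright spot in a dark scene is the most salient location.
scene = [[0, 0, 0, 0],
         [0, 9, 0, 0],
         [0, 0, 0, 0]]
sal = saliency(scene)
peak_value, peak_pos = max((v, (y, x))
                           for y, row in enumerate(sal)
                           for x, v in enumerate(row))
```

Running the sketch on the toy scene puts the saliency peak exactly at the bright spot, which is the behaviour such attention models are built around.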

Published in: Sensors, weapons, C4I and operations research, Faculty of Military Sciences of the NLDA, Faculty Research Office, Breda, Vol. 2008, pages 119-133
Year: 2008
Type: Book chapter
Language: English
