Show simple item record

dc.contributor.author	Tarabalka, Yulija
dc.contributor.author	Haavardsholm, Trym Vegard
dc.contributor.author	Kåsen, Ingebjørg
dc.contributor.author	Skauli, Torbjørn
dc.date.accessioned	2017-11-01T12:23:40Z
dc.date.accessioned	2017-11-02T14:41:00Z
dc.date.available	2017-11-01T12:23:40Z
dc.date.available	2017-11-02T14:41:00Z
dc.date.issued	2009
dc.identifier.citation	Tarabalka Y, Haavardsholm TV, Kåsen I, Skauli T. Real-time anomaly detection in hyperspectral images using multivariate normal mixture models and GPU processing. Journal of Real-Time Image Processing. 2009;4(3):287-300	en_GB
dc.identifier.uri	http://hdl.handle.net/20.500.12242/772
dc.identifier.uri	https://ffi-publikasjoner.archive.knowledgearc.net/handle/20.500.12242/772
dc.description	Tarabalka, Yulija; Haavardsholm, Trym Vegard; Kåsen, Ingebjørg; Skauli, Torbjørn. Real-time anomaly detection in hyperspectral images using multivariate normal mixture models and GPU processing. Journal of Real-Time Image Processing 2009; Volume 4(3), pp. 287-300	en_GB
dc.description.abstract	Hyperspectral imaging, which records a detailed spectrum of light arriving in each pixel, has many potential uses in remote sensing as well as other application areas. Practical applications will typically require real-time processing of large data volumes recorded by a hyperspectral imager. This paper investigates the use of graphics processing units (GPU) for such real-time processing. In particular, the paper studies a hyperspectral anomaly detection algorithm based on normal mixture modelling of the background spectral distribution, a computationally demanding task relevant to military target detection and numerous other applications. The algorithm parts are analysed with respect to complexity and potential for parallelization. The computationally dominating parts are implemented on an Nvidia GeForce 8800 GPU using the Compute Unified Device Architecture programming interface. GPU computing performance is compared to a multicore central processing unit implementation. Overall, the GPU implementation runs significantly faster, particularly for highly data-parallelizable and arithmetically intensive algorithm parts. For the parts related to covariance computation, the speed gain is less pronounced, probably due to a smaller ratio of arithmetic to memory access. Detection results on an actual data set demonstrate that the total speedup provided by the GPU is sufficient to enable real-time anomaly detection with normal mixture models even for an airborne hyperspectral imager with high spatial and spectral resolution.	en_GB
dc.language.iso	en	en_GB
dc.title	Real-time anomaly detection in hyperspectral images using multivariate normal mixture models and GPU processing	en_GB
dc.type	Article	en_GB
dc.date.updated	2017-11-01T12:23:40Z
dc.identifier.cristinID	514626
dc.identifier.doi	10.1007/s11554-008-0105-x
dc.source.issn	1861-8200
dc.source.issn	1861-8219
dc.type.document	Journal article
dc.relation.journal	Journal of Real-Time Image Processing
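The abstract describes anomaly detection against a normal mixture model of the background spectral distribution. The scoring step can be sketched as follows; this is a minimal NumPy illustration of mixture-likelihood anomaly scoring, not the paper's CUDA implementation, and the function name, interface, and use of negative log-likelihood as the score are assumptions.

```python
import numpy as np

def mixture_anomaly_scores(pixels, means, covs, weights):
    """Score pixel spectra against a K-component multivariate normal
    mixture background model (illustrative sketch, not the paper's code).

    The anomaly score is the negative log-likelihood under the mixture,
    so spectra that are unlikely under every background component
    receive high scores.

    pixels:  (N, B) array of N spectra with B bands
    means:   (K, B) component means
    covs:    (K, B, B) component covariance matrices
    weights: (K,) mixture weights summing to 1
    """
    N, B = pixels.shape
    log_probs = np.empty((len(weights), N))
    for k, (mu, cov, w) in enumerate(zip(means, covs, weights)):
        diff = pixels - mu                                # (N, B)
        inv = np.linalg.inv(cov)
        # Squared Mahalanobis distance of each pixel to this component
        maha = np.einsum('nb,bc,nc->n', diff, inv, diff)
        _, logdet = np.linalg.slogdet(cov)
        log_probs[k] = np.log(w) - 0.5 * (maha + logdet + B * np.log(2 * np.pi))
    # log-sum-exp over components; negate so low likelihood => high score
    return -np.logaddexp.reduce(log_probs, axis=0)
```

With a toy two-component background model, a spectrum far from both components scores higher than one near a component mean, which is the behaviour the detector relies on.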

