
dc.contributor.author: Bae, Egil [en_GB]
dc.date.accessioned: 2020-12-18T14:20:34Z
dc.date.accessioned: 2021-01-06T09:41:30Z
dc.date.available: 2020-12-18T14:20:34Z
dc.date.available: 2021-01-06T09:41:30Z
dc.date.issued: 2020
dc.identifier.citation: Bae E. Automatic object recognition within point clouds in clustered or scattered scenes. Proceedings of SPIE, the International Society for Optical Engineering. 2020;11538:1-20 [en_GB]
dc.identifier.uri: http://hdl.handle.net/20.500.12242/2817
dc.description: Bae, Egil. Automatic object recognition within point clouds in clustered or scattered scenes. Proceedings of SPIE, the International Society for Optical Engineering. 2020; Volume 11538, pp. 1-20 [en_GB]
dc.description.abstract: We consider the problem of automatically locating, classifying and identifying an object within a point cloud that has been acquired by scanning a scene with a ladar. The recent work [E. Bae, Automatic scene understanding and object identification in point clouds, Proceedings of SPIE Volume 11160, 2019] approached the problem by first segmenting the point cloud into multiple classes of similar objects, before a more sophisticated and computationally demanding algorithm attempted to recognize/identify individual objects within the relevant class. The overall approach could find and identify partially visible objects with high confidence, but may fail if the object of interest is placed right next to other objects from the same class, or if the object of interest is scattered into several disjoint parts due to occlusions or slant view angles. This paper proposes an improvement of the algorithm that allows it to handle both clustered and scattered scenarios in a unified way. It introduces an intermediate step between segmentation and recognition that extracts objects from the relevant class based on the similarity between their distance function and the distance function of a reference shape for different view angles. The similarity measure naturally accounts for occlusions and partial visibility, and can be expressed analytically in the distance coordinate for azimuth and elevation angles within the field of view (FOV). This reduces the search from three dimensions to two. Furthermore, calculations can be limited to the parts of the FOV corresponding to the relevant segmented region. Consequently, the computational efficiency of the algorithm is high, and it is possible to match against the reference shape for multiple discrete view angles. The subsequent recognition step analyzes the extracted objects in more detail and avoids suffering from discretization and conversion errors. The algorithm is demonstrated on various maritime examples. [en_GB]
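The occlusion-aware similarity between a scene's distance function and that of a reference shape, as described in the abstract, might be sketched roughly as follows. This is an illustrative reconstruction under assumed conventions, not the paper's actual algorithm: the range-image representation over an (elevation, azimuth) grid, the `range_image_similarity` name, and the tolerance scheme are all assumptions introduced here.

```python
import numpy as np

def range_image_similarity(scene, reference, tol=0.2):
    """Compare two range images defined on the same (elevation, azimuth) grid.

    NaN marks directions with no ladar return.  Pixels where the scene
    return lies well in front of the reference are treated as occlusions
    by a foreground object and ignored, so partial visibility does not
    penalise the score.  Returns the fraction of comparable pixels whose
    ranges agree within `tol` metres.
    """
    valid = ~np.isnan(scene) & ~np.isnan(reference)
    occluded = valid & (scene < reference - tol)  # something blocks the view
    compare = valid & ~occluded
    if not compare.any():
        return 0.0
    matches = np.abs(scene[compare] - reference[compare]) <= tol
    return matches.mean()

# Toy example: a 3x3 field of view, reference shape at 10 m range.
ref = np.full((3, 3), 10.0)
scn = ref.copy()
scn[0, 0] = 5.0        # occluding foreground point -> ignored
scn[2, 2] = np.nan     # no return -> ignored
print(range_image_similarity(scn, ref))  # 1.0: all comparable pixels match
```

Because the comparison is restricted to the relevant part of the FOV and each view angle costs only a 2D image comparison, a loop over multiple discrete view angles of the reference shape remains cheap, which is consistent with the efficiency argument made in the abstract.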
dc.language.iso: en [en_GB]
dc.subject: Scene analysis [en_GB]
dc.subject: Lidar [en_GB]
dc.title: Automatic object recognition within point clouds in clustered or scattered scenes [en_GB]
dc.date.updated: 2020-12-18T14:20:33Z
dc.identifier.cristinID: 1861109
dc.identifier.doi: https://doi.org/10.1117/12.2574119
dc.source.issn: 0277-786X
dc.source.issn: 1996-756X
dc.type.document: Journal article
dc.relation.journal: Proceedings of SPIE, the International Society for Optical Engineering

