Show simple item record

dc.contributor.author	Orescanin, Marco	en_GB
dc.contributor.author	Harrington, Brian	en_GB
dc.contributor.author	Olson, Derek R.	en_GB
dc.contributor.author	Geilhufe, Marc	en_GB
dc.contributor.author	Hansen, Roy Edgar	en_GB
dc.contributor.author	Warakagoda, Narada Dilp	en_GB
dc.date.accessioned	2024-02-21T12:05:54Z
dc.date.accessioned	2024-11-22T10:16:00Z
dc.date.available	2024-02-21T12:05:54Z
dc.date.available	2024-11-22T10:16:00Z
dc.date.issued	2023
dc.identifier.citation	Orescanin M, Harrington B, Olson DR, Geilhufe MG, Hansen RE, Warakagoda ND. Uncertainty quantification with deep learning through variational inference with applications to synthetic aperture sonar. Underwater Acoustics Conference & Exhibition (UACE). 2023	en_GB
dc.identifier.uri	http://hdl.handle.net/20.500.12242/3370
dc.description	Orescanin, Marco; Harrington, Brian; Olson, Derek R.; Geilhufe, Marc; Hansen, Roy Edgar; Warakagoda, Narada Dilp. Uncertainty quantification with deep learning through variational inference with applications to synthetic aperture sonar. Underwater Acoustics Conference & Exhibition (UACE) 2023	en_GB
dc.description.abstract	Deep learning (DL) has gained popularity in applications and research within the active sonar community, in both academic and commercial settings, due to the ability of such models to learn complex non-linear relationships between the input features and the labels in a data-driven manner. This has led to significant improvements in automatic target recognition (ATR) and seafloor texture understanding with Synthetic Aperture Sonar (SAS). Most of the DL models reported in the literature are deterministic and do not provide estimates of the uncertainty of their predictions, limiting their utility for downstream tasks such as ATR and change detection. In this work we demonstrate the ability to quantify uncertainty in deep learning predictions by utilizing variational inference to develop Bayesian neural networks. Further, we explore the decomposition of the obtained uncertainty into aleatoric and epistemic components. We showcase the importance of this decomposition for classifier performance and interpretation of the results. We introduce and compare several state-of-the-art methods in variational inference on the task of classifying imaging artifacts in SAS. We conduct this on a novel dataset developed for this classification task through the introduction of physical perturbations in the image formation stage, namely: 1) a sound speed error of 40 m/s, 2) a navigation error through a perturbation in yaw of 0.35°, and 3) Gaussian noise over the imaging channels prior to pulse compression (lowering the average image SNR to 5 dB). Overall, we demonstrate that our best model, a mean-field variational inference ResNet architecture using flipout, achieves 92% accuracy with calibrated uncertainty. By rejecting the 10% of the data with the highest uncertainty, we achieve an additional 4% improvement in accuracy.	en_GB
dc.language.iso	en	en_GB
dc.relation.uri	https://www.uaconferences.org/component/contentbuilder/details/38/53/uace2023-uncertainty-quantification-with-deep-learning-through-variational-inference-with-applications-to-synthetic-aperture-sonar?
dc.subject	Deep learning	en_GB
dc.subject	Synthetic aperture sonar	en_GB
dc.title	Uncertainty quantification with deep learning through variational inference with applications to synthetic aperture sonar	en_GB
dc.date.updated	2024-02-21T12:05:54Z
dc.identifier.cristinID	2220504
dc.source.issn	2408-0195
dc.type.document	Journal article
dc.relation.journal	Underwater Acoustics Conference & Exhibition (UACE)
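The abstract describes decomposing predictive uncertainty into aleatoric and epistemic components and rejecting the most uncertain 10% of predictions. A minimal sketch of one standard way to do this from Monte Carlo samples of a Bayesian classifier's predictive distribution (all function names, shapes, and thresholds here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def decompose_uncertainty(probs):
    """probs: (T, N, C) array of T Monte Carlo softmax samples
    for N inputs over C classes. Returns per-input uncertainties."""
    mean_p = probs.mean(axis=0)                                # (N, C) predictive mean
    # Total uncertainty: entropy of the mean predictive distribution.
    total = -np.sum(mean_p * np.log(mean_p + 1e-12), axis=-1)
    # Aleatoric: expected entropy of the individual MC predictions.
    aleatoric = -np.sum(probs * np.log(probs + 1e-12), axis=-1).mean(axis=0)
    # Epistemic: mutual information (total minus aleatoric), >= 0 by Jensen.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

def accuracy_after_rejection(probs, labels, reject_frac=0.10):
    """Drop the reject_frac most-uncertain inputs, score the rest."""
    total, _, _ = decompose_uncertainty(probs)
    keep = np.argsort(total)[: int(len(labels) * (1 - reject_frac))]
    preds = probs.mean(axis=0).argmax(axis=-1)
    return (preds[keep] == labels[keep]).mean()
```

The rejection step mirrors the abstract's reported gain: discarding the highest-uncertainty predictions trades coverage for accuracy, which is only meaningful when the uncertainty estimates are calibrated.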

