dc.contributor.author | Løkken, Kristin | en_GB |
dc.contributor.author | Aurdal, Lars | en_GB |
dc.contributor.author | Brattli, Alvin Andreas | en_GB |
dc.contributor.author | Palm, Hans Christian | en_GB |
dc.date.accessioned | 2020-09-24T06:41:44Z | |
dc.date.available | 2020-09-24T06:41:44Z | |
dc.date.issued | 2020-09-20 | |
dc.identifier.citation | Løkken KHL, Aurdal L, Brattli AA, Palm HC. Investigating robustness of adversarial camouflage (AC) for naval vessels. Proceedings of SPIE, the International Society for Optical Engineering. 2020;11543(115430G) | en_GB |
dc.identifier.uri | http://hdl.handle.net/20.500.12242/2772 | |
dc.description | Løkken, Kristin; Aurdal, Lars; Brattli, Alvin Andreas; Palm, Hans Christian. Investigating robustness of adversarial camouflage (AC) for naval vessels. Proceedings of SPIE, the International Society for Optical Engineering 2020; Volume 11543 (115430G) | en_GB |
dc.description.abstract | The use of camouflage is widespread in the biological domain, and camouflage has also been used extensively by armed forces around the world to make visual detection and classification of objects of military interest more difficult. The recent advent of ever more autonomous military agents raises the question of whether camouflage can have a similar effect on autonomous agents as it has on human observers, and if so, what kind of camouflage will be effective against such adversaries.
In previous work, we have shown that image classifiers based on deep neural networks can be confused by patterns generated by generative adversarial networks (GANs). Specifically, we trained a classifier to distinguish between two ship types, military and civilian. We then used a GAN to generate patterns that, when overlaid on parts of military vessels (frigates), made the classifier confuse the modified frigates with civilian vessels. We termed such patterns "adversarial camouflage" (AC), since they effectively camouflage the frigates with respect to the classifier.
The type of adversarial attack described in our previous work is a so-called white-box attack, i.e. an attack devised with full knowledge of the classifier under attack. Black-box attacks, by contrast, target classifiers whose internals are unknown. In our context, the ultimate goal is to design a GAN capable of black-box attacks, in other words a GAN that generates AC that is effective across a wide range of neural network classifiers.
In the current work, we study techniques to improve the robustness of our GAN-based approach by investigating whether a GAN can be trained to fool a selection of neural network-based classifiers, or to reduce the confidence of their classifications to a degree that makes them unreliable. Our results indicate that it is indeed possible to weaken a wider range of neural network classifiers by training the generator against several classifiers. | en_GB |
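The abstract's closing paragraph describes training a single patch generator against several classifiers at once so that the resulting camouflage transfers beyond any one model. The paper itself is not reproduced in this record, so the following is only a minimal, hypothetical sketch of that idea in PyTorch: the generator, the dummy classifiers, the patch placement, the class index, and all hyperparameters below are invented stand-ins, not the authors' code.

```python
# Illustrative sketch of ensemble adversarial-camouflage training.
# NOT the authors' implementation; all names and values are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchGenerator(nn.Module):
    """Maps a latent vector to a small RGB camouflage patch in [0, 1]."""
    def __init__(self, latent_dim=64, patch_size=32):
        super().__init__()
        self.patch_size = patch_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * patch_size * patch_size), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, self.patch_size, self.patch_size)

def overlay(images, patches, top, left):
    """Paste the generated patch onto a fixed image region
    (standing in for the hull of a frigate)."""
    images = images.clone()
    ps = patches.shape[-1]
    images[:, :, top:top + ps, left:left + ps] = patches
    return images

def ensemble_adv_loss(classifiers, images, target_class):
    """Average cross-entropy toward the desired (civilian) class over
    ALL classifiers, so the patch must fool every model at once."""
    target = torch.full((images.size(0),), target_class, dtype=torch.long)
    losses = [F.cross_entropy(clf(images), target) for clf in classifiers]
    return torch.stack(losses).mean()

def make_dummy_classifier():
    """Tiny stand-in for a pretrained military/civilian ship classifier."""
    return nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(8, 2))

# Frozen ensemble of classifiers; in practice these would be pretrained.
classifiers = [make_dummy_classifier() for _ in range(3)]
for clf in classifiers:
    clf.eval()
    for p in clf.parameters():
        p.requires_grad_(False)

frigate_batch = torch.rand(16, 3, 224, 224)  # stand-in for frigate images
CIVILIAN = 0  # hypothetical index of the "civilian vessel" class

gen = PatchGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for step in range(200):
    z = torch.randn(16, 64)
    patched = overlay(frigate_batch, gen(z), top=80, left=96)
    loss = ensemble_adv_loss(classifiers, patched, CIVILIAN)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Averaging the loss over all classifiers is one simple way to realise the paper's stated goal: the generator is rewarded only for patterns that mislead every model in the ensemble, rather than for exploiting the quirks of a single classifier, which is what distinguishes this from the earlier white-box setting.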
dc.language.iso | en | en_GB |
dc.subject | Machine learning | en_GB |
dc.subject | Neural networks | en_GB |
dc.subject | Camouflage | en_GB |
dc.subject | Naval vessels | en_GB |
dc.title | Investigating robustness of adversarial camouflage (AC) for naval vessels | en_GB |
dc.date.updated | 2020-09-24T06:41:44Z | |
dc.identifier.cristinID | 1832163 | |
dc.identifier.doi | 10.1117/12.2573676 | |
dc.source.issn | 0277-786X | |
dc.source.issn | 1996-756X | |
dc.type.document | Journal article | |
dc.relation.journal | Proceedings of SPIE, the International Society for Optical Engineering | |