Fooling Neural Networks in the Physical World with 3D Adversarial Objects · labsix

https://www.labsix.org/physical-objects-that-fool-neural-nets/

31 Oct 2017 · 3 min read

We've developed an approach to generate 3D adversarial objects that reliably fool neural networks in the real world, no matter how the objects are viewed. Neural-network-based classifiers reach near-human performance on many tasks, and they are used in high-risk, real-world systems. Yet these same networks are particularly vulnerable to adversarial examples: carefully perturbed inputs that cause targeted misclassification. One example is the tabby cat below, which we perturbed to look like guacamole to Google's InceptionV3 image classifier. However, adversarial examples generated with standard techniques break down when transferred into the real world, as a result of zoom, camera noise, and other transformations that are inevitable in the physical world.
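To make a perturbation survive those transformations, one can optimize it over a whole distribution of transformations rather than for a single fixed image. Below is a minimal sketch of that idea in PyTorch: targeted projected gradient descent whose loss is averaged over random rotations, in the spirit of the Expectation Over Transformation approach. This is not labsix's actual code; the input image, the target class id, and all hyperparameters here are illustrative assumptions.

```python
# Sketch: targeted adversarial perturbation robust to random rotations.
# Assumptions: torchvision's pretrained InceptionV3, a random stand-in image,
# and ImageNet class id 924 (assumed to be "guacamole").
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

model = models.inception_v3(weights="DEFAULT").eval()
x = torch.rand(1, 3, 299, 299)        # stand-in for the tabby-cat image
target = torch.tensor([924])          # assumed target class id
x_adv = x.clone().requires_grad_(True)
eps, step, iters = 8 / 255, 1 / 255, 100

for _ in range(iters):
    # Average the targeted loss over random transformations so the
    # perturbation keeps working as the viewpoint changes.
    loss = 0.0
    for _ in range(10):
        angle = float(torch.empty(1).uniform_(-15.0, 15.0))
        loss = loss + torch.nn.functional.cross_entropy(
            model(TF.rotate(x_adv, angle)), target
        )
    loss.backward()
    with torch.no_grad():
        x_adv -= step * x_adv.grad.sign()            # descend toward the target class
        x_adv.clamp_(x - eps, x + eps).clamp_(0, 1)  # stay close to the original image
    x_adv.grad = None
```

A perturbation optimized for a single view would fail as soon as the camera moved; averaging the loss over sampled transformations is what lets the adversarial object stay adversarial from many angles.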
