Deep learning is at the heart of the current rise of machine learning and artificial intelligence. In the field of Computer Vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks therefore pose a serious threat to the success of deep learning in practice.

Adversarial input detection has emerged as a prominent technique to harden Deep Neural Networks (DNNs) against adversarial attacks. Most prior works use neural-network-based detectors or complex statistical analysis for adversarial detection; these approaches are computationally intensive and themselves vulnerable to adversarial attacks.

To this end, we propose DetectX, a hardware-friendly adversarial detection mechanism that uses hardware signatures such as the Sum of column Currents (SoI) in memristive crossbars (XBar). We show that adversarial inputs have a higher SoI than clean inputs; however, the difference is too small for reliable adversarial detection. Hence, we propose a dual-phase training methodology: Phase 1 training increases the separation between clean and adversarial SoIs, and Phase 2 training improves the overall robustness against different strengths of adversarial attacks. For hardware-based adversarial detection, we implement the DetectX module using 32nm CMOS circuits and integrate it with a Neurosim-like analog crossbar architecture. We perform hardware evaluation of the Neurosim+DetectX system on the Neurosim platform using the datasets CIFAR10 (VGG8), CIFAR100 (VGG16), and TinyImagenet (ResNet18). Our experiments show that DetectX is 10x-25x more energy efficient than previous state-of-the-art works and is immune to dynamic adversarial attacks. Moreover, we achieve high detection performance (ROC-AUC > 0.95) for strong white-box and black-box attacks. The code has been released.
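The SoI-based detection idea can be sketched with an idealized software model of a memristive crossbar. Everything below is illustrative rather than the paper's implementation: the conductance matrix, the sign perturbation, the 1.05x threshold, and the hinge-loss comment are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized memristive crossbar: weights stored as conductances G (rows = input
# lines, columns = output lines). A matrix-vector multiply is realized by applying
# input voltages v and reading the current accumulated on each column: I = v @ G.
G = rng.uniform(0.0, 1.0, size=(64, 32))  # hypothetical conductance matrix

def column_currents(v, g):
    """Column currents of one crossbar MVM (ideal Ohm's-law model)."""
    return v @ g

def soi(v, g):
    """Sum of column Currents (SoI): a single scalar hardware signature."""
    return float(np.sum(np.abs(column_currents(v, g))))

# A clean input and an adversarially perturbed copy (FGSM-style sign perturbation;
# the "gradient" and epsilon here are placeholders, not the paper's attack).
x_clean = rng.uniform(0.0, 1.0, size=64)
fake_grad = rng.standard_normal(64)
x_adv = np.clip(x_clean + 0.1 * np.sign(fake_grad), 0.0, 1.0)

# Threshold-based detection: flag inputs whose SoI exceeds a calibrated bound.
# In practice the threshold would be calibrated on clean data; 1.05x the clean
# SoI is purely a placeholder value.
threshold = 1.05 * soi(x_clean, G)
is_adversarial = soi(x_adv, G) > threshold

# Phase-1 training (sketch only): one plausible way to widen the clean/adversarial
# SoI gap is a hinge term, e.g. L = L_task + lam * max(0, m - (soi_adv - soi_clean)).
# The exact loss used by DetectX is not given here; this form is an assumption.
```

Without the dual-phase training the paper describes, the clean and adversarial SoIs of an untrained crossbar like this one overlap heavily, which is exactly the motivation for Phase 1: the detection statistic only becomes reliable once training enlarges the separation.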