International Journal of Advanced Computer Science and Applications (IJACSA), Volume 15, Issue 1, 2024.
Abstract: This research investigates the escalating issue of adversarial attacks on neural networks within AI security, specifically targeting image recognition on the MNIST dataset. Our exploration centered on a combined approach incorporating feature masking and gradient manipulation to bolster adversarial defense. The main objective was to evaluate the extent to which this integrated strategy enhances network resilience against such attacks, contributing to the development of more robust AI systems. In our experimental framework, we utilized a conventional neural network architecture, integrating various levels of feature masking alongside established training protocols. A baseline model without feature masking served as a comparative standard to gauge the efficacy of the proposed technique. We assessed the model’s performance in standard scenarios as well as under Fast Gradient Sign Method (FGSM) adversarial attacks. The outcomes provided significant insights. The baseline model achieved a high test accuracy of 98% on the MNIST dataset, yet it showed limited resistance to adversarial attacks, with accuracy diminishing to 60% under FGSM perturbations. Conversely, models incorporating feature masking exhibited an inverse relationship between masking proportion and clean accuracy, counterbalanced by an improvement in adversarial resilience. Specifically, a 10% masking ratio achieved 96% accuracy with 75% robustness against attacks, 30% masking yielded 94% accuracy with 80% robustness, and 50% masking resulted in 92% accuracy while attaining the highest robustness at 85%. These results affirm the efficacy of feature masking in augmenting adversarial defense, highlighting a pivotal trade-off between accuracy and resilience.
The study lays the groundwork for further investigations into refined masking methodologies and their combination with other defensive strategies, potentially broadening the scope of neural network security against adversarial threats. Our contributions are significant to the field of AI security, demonstrating an effective strategy for developing more secure and dependable neural network frameworks.
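The FGSM attack evaluated in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it applies FGSM to a toy logistic-regression model in NumPy, where the weights, bias, and input are illustrative placeholders, and the epsilon value is chosen arbitrarily.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Craft an FGSM adversarial example for a logistic-regression model.

    For binary cross-entropy loss, the input gradient is
    dL/dx = (sigmoid(w.x + b) - y) * w, and FGSM adds
    eps * sign(dL/dx) to the input (an L-infinity-bounded step).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Illustrative MNIST-sized example (28x28 flattened); values are synthetic.
rng = np.random.default_rng(0)
w = rng.normal(size=784)
b = 0.0
x = rng.uniform(size=784)
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
```

Because each pixel moves by at most eps, the perturbation is visually small, yet it shifts the input in the direction that increases the loss fastest, which is why the baseline model's accuracy in the study drops from 98% to 60%.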
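The feature-masking defense can likewise be sketched at the input level. The masking ratios (10%, 30%, 50%) match those reported in the abstract, but the concrete mechanism shown here, zeroing a random subset of input pixels during training, is an assumption about the technique, not the paper's verified implementation.

```python
import numpy as np

def mask_features(x, ratio, rng):
    """Zero out a random `ratio` fraction of the input features.

    Applied during training, this forces the network not to rely on any
    fixed subset of pixels, which is the intuition behind the
    accuracy/robustness trade-off reported in the study.
    """
    n = x.size
    k = int(round(ratio * n))
    idx = rng.choice(n, size=k, replace=False)  # features to suppress
    x_masked = x.copy()
    x_masked[idx] = 0.0
    return x_masked

rng = np.random.default_rng(42)
x = np.ones(784)                  # flattened 28x28 image, synthetic values
masked = {r: mask_features(x, r, rng) for r in (0.1, 0.3, 0.5)}
```

Higher ratios remove more information per example, which is consistent with the reported drop in clean accuracy (96% → 94% → 92%) as robustness rises (75% → 80% → 85%).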
Ganesh Ingle and Sanjesh Pawale, “Enhancing Adversarial Defense in Neural Networks by Combining Feature Masking and Gradient Manipulation on the MNIST Dataset,” International Journal of Advanced Computer Science and Applications (IJACSA), 15(1), 2024. http://dx.doi.org/10.14569/IJACSA.2024.01501114
@article{Ingle2024,
title = {Enhancing Adversarial Defense in Neural Networks by Combining Feature Masking and Gradient Manipulation on the MNIST Dataset},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2024.01501114},
url = {http://dx.doi.org/10.14569/IJACSA.2024.01501114},
year = {2024},
publisher = {The Science and Information Organization},
volume = {15},
number = {1},
author = {Ganesh Ingle and Sanjesh Pawale}
}
Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.