DOI: 10.14569/IJACSA.2024.01503120

Enhancing Model Robustness and Accuracy Against Adversarial Attacks via Adversarial Input Training

Authors: Ganesh Ingle and Sanjesh Pawale

International Journal of Advanced Computer Science and Applications (IJACSA), Volume 15, Issue 3, 2024.

Abstract: Adversarial attacks present a formidable challenge to the integrity of Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) models, particularly in the domain of power quality disturbance (PQD) classification, necessitating the development of effective defense mechanisms. These attacks, characterized by their subtlety, can significantly degrade the performance of models critical for maintaining power system stability and efficiency. This study introduces the concept of adversarial attacks on CNN-LSTM models and emphasizes the critical need for robust defenses.

We propose Input Adversarial Training (IAT) as a novel defense strategy aimed at enhancing the resilience of CNN-LSTM models. IAT trains models on a blend of clean and adversarially perturbed inputs in order to improve their robustness. The effectiveness of IAT is assessed through a series of comparisons with established defense mechanisms, employing metrics such as accuracy, precision, recall, and F1-score on both unperturbed and adversarially modified datasets.

The results are compelling: models defended with IAT exhibit remarkable improvements in robustness against adversarial attacks. Specifically, IAT-enhanced models demonstrated an increase in accuracy on adversarially perturbed data to 85%, a precision improvement to 86%, a recall rise to 85%, and an F1-score enhancement to 85.5%. These figures significantly surpass those achieved by models using standard adversarial training (75% accuracy) and defensive distillation (70% accuracy), showcasing IAT’s superior capacity to maintain model accuracy under adversarial conditions.

In conclusion, IAT stands out as an effective defense mechanism, significantly bolstering the resilience of CNN-LSTM models against adversarial perturbations. This research not only sheds light on the vulnerabilities of these models to adversarial attacks but also establishes IAT as a benchmark in defense strategy development, promising enhanced security and reliability for PQD classification and related applications.

Keywords: Adversarial attacks; Input Adversarial Training (IAT); deep learning security; model robustness
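The page carries no implementation, but the core idea of IAT as the abstract describes it (training on a blend of clean and adversarially perturbed inputs) can be sketched briefly. The Python/PyTorch sketch below is illustrative only: the perturbation method (single-step FGSM), the epsilon value, the clean/adversarial mixing ratio, and the toy CNN-LSTM layout are all assumptions, not the authors' actual setup.

import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    # Toy CNN-LSTM classifier for 1-D signals such as PQD waveforms
    # (hypothetical architecture; the paper's model may differ).
    def __init__(self, n_classes=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                    # x: (batch, 1, length)
        h = self.conv(x)                     # (batch, 16, length // 2)
        h, _ = self.lstm(h.transpose(1, 2))  # (batch, length // 2, 32)
        return self.fc(h[:, -1])             # logits from last time step

def fgsm_perturb(model, loss_fn, x, y, eps=0.05):
    # One-step FGSM: x + eps * sign(grad_x loss); eps is an assumption.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def iat_step(model, optimizer, loss_fn, x, y, adv_ratio=0.5):
    # One IAT update: replace a fraction of the batch with adversarial
    # versions of those same samples (labels stay aligned), then train
    # on the blended batch.
    n_adv = int(adv_ratio * x.size(0))
    x_adv = fgsm_perturb(model, loss_fn, x[:n_adv], y[:n_adv])
    x_mix = torch.cat([x_adv, x[n_adv:]])
    optimizer.zero_grad()                    # clears grads left by FGSM
    loss = loss_fn(model(x_mix), y)
    loss.backward()
    optimizer.step()
    return loss.item()

A training loop would call iat_step once per batch; evaluating the resulting model on both clean and FGSM-perturbed test data yields the kind of clean-versus-adversarial accuracy, precision, recall, and F1 comparison the abstract reports.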

Ganesh Ingle and Sanjesh Pawale, “Enhancing Model Robustness and Accuracy Against Adversarial Attacks via Adversarial Input Training,” International Journal of Advanced Computer Science and Applications (IJACSA), 15(3), 2024. http://dx.doi.org/10.14569/IJACSA.2024.01503120

@article{Ingle2024,
title = {Enhancing Model Robustness and Accuracy Against Adversarial Attacks via Adversarial Input Training},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2024.01503120},
url = {http://dx.doi.org/10.14569/IJACSA.2024.01503120},
year = {2024},
publisher = {The Science and Information Organization},
volume = {15},
number = {3},
author = {Ganesh Ingle and Sanjesh Pawale}
}



Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
