International Journal of Advanced Computer Science and Applications (IJACSA), Volume 16, Issue 1, 2025.
Abstract: The inherent biases present in language models often lead to discriminatory predictions based on demographic attributes. Fairness in NLP refers to the goal of ensuring that language models and other NLP systems do not produce biased or discriminatory outputs that could negatively affect individuals or groups; such bias often arises from training data that reflects societal stereotypes or imbalances. Robustness in NLP refers to a model's ability to maintain performance when faced with noisy, adversarial, or out-of-distribution data: a robust NLP model should handle variations in input without failing or producing inaccurate results. The proposed approach introduces a novel metric, CFRE (Context-Sensitive Fairness and Robustness Evaluation), designed to measure both the fairness and the robustness of an NLP model under different contextual shifts, and demonstrates the benefits of this metric across the experimental parameters. The work then integrates counterfactual data augmentation with Self-Imitation Reinforcement Learning (SIL), which reinforces successful counterfactual generation by letting the model learn from its own high-reward experiences, fostering a more balanced understanding of language. The integration of SIL allows efficient exploration of the action space, guiding the model to produce consistently unbiased outputs across different contexts. Extensive experiments demonstrate the effectiveness of the method: compared against WEAT and SMART testing, the proposed approach shows a significant reduction in bias without compromising the model's overall performance. The framework not only addresses bias in existing models but also contributes a more robust methodology for training fairer NLP systems. Both the proposed metric and SIL achieved better results on the experimental parameters.
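To make two of the building blocks named in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation: it illustrates counterfactual data augmentation by swapping demographic attribute terms, and the standard WEAT effect-size computation used here only as a comparison baseline. The SWAP_PAIRS table, the regex tokenization, and the example sentence are assumptions made for this illustration; the paper's CFRE metric and SIL training loop are not reproduced.

import re
import numpy as np

# Assumed demographic term pairs for counterfactual swapping (illustrative only).
SWAP_PAIRS = {"he": "she", "she": "he", "him": "her", "her": "him",
              "man": "woman", "woman": "man"}

def counterfactual_augment(sentence: str) -> str:
    """Return a counterfactual copy of the sentence with demographic terms swapped."""
    tokens = re.findall(r"\w+|\W+", sentence)          # split into word / non-word runs
    return "".join(SWAP_PAIRS.get(t.lower(), t) for t in tokens)

def _assoc(w, A, B):
    """s(w, A, B): mean cosine similarity of w with set A minus that with set B."""
    cos = lambda u, v: np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: difference of mean target associations, divided by the
    standard deviation of associations over the pooled targets X and Y."""
    sx = [_assoc(x, A, B) for x in X]
    sy = [_assoc(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

if __name__ == "__main__":
    # Naive swap: case and grammatical agreement are not preserved in this sketch.
    print(counterfactual_augment("He said the nurse gave him her notes."))

In practice, the augmented counterfactual sentences would be added to the training data alongside the originals, and a WEAT-style score computed before and after retraining gives one external reference point for the bias reduction reported in the paper.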
K. C. Sreedhar, T. Kavya, J. V. S. Rajendra Prasad and V. Varshini, “A Novel Metric-Based Counterfactual Data Augmentation with Self-Imitation Reinforcement Learning (SIL),” International Journal of Advanced Computer Science and Applications (IJACSA), 16(1), 2025. http://dx.doi.org/10.14569/IJACSA.2025.0160163
@article{Sreedhar2025,
title = {A Novel Metric-Based Counterfactual Data Augmentation with Self-Imitation Reinforcement Learning (SIL)},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2025.0160163},
url = {http://dx.doi.org/10.14569/IJACSA.2025.0160163},
year = {2025},
publisher = {The Science and Information Organization},
volume = {16},
number = {1},
author = {K. C. Sreedhar and T. Kavya and J. V. S. Rajendra Prasad and V. Varshini}
}
Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.