International Journal of Advanced Computer Science and Applications (IJACSA), Volume 16 Issue 9, 2025.
Abstract: Figurative language, especially sarcasm, poses significant challenges for Natural Language Processing (NLP) models because of its implicit, context-sensitive nature. Both traditional and transformer-based models tend to struggle with these subtle forms, particularly on imbalanced datasets or without mechanisms for targeted interpretability. To overcome these shortcomings, this study proposes a hybrid deep learning architecture that integrates RoBERTa for rich contextual embeddings, Bidirectional Gated Recurrent Units (BiGRU) to capture bidirectional sequential relations, and an attention mechanism that allows the model to focus on the most informative parts of the input text. This integration enhances semantic understanding and classification accuracy compared to existing solutions. The model is trained and evaluated on the benchmark News Headlines Dataset for Sarcasm Detection using binary cross-entropy loss minimized with Adam, along with dropout and learning-rate scheduling to avoid overfitting. Experimental results demonstrate strong performance, attaining an accuracy of 92.4%, a precision of 91.1%, a recall of 93.2%, and an F1-score of 92.1%. These results outperform baseline techniques such as BiLSTM with attention and fine-tuned BERT variants. The implementation uses PyTorch and Hugging Face Transformers, ensuring reproducibility and extensibility. While effective, the model struggles with figurative expressions that require external world knowledge or cultural context beyond pretrained embeddings. Future work aims to integrate external knowledge graphs and extend the model to multilingual and cross-domain scenarios. This hybrid framework advances figurative language detection, contributing to the broader goal of enhancing AI’s nuanced understanding and interpretability of human language.
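The abstract describes a RoBERTa → BiGRU → attention → binary-classification pipeline trained with binary cross-entropy and Adam. The paper's exact hyperparameters are not given here, so the sketch below is only a minimal PyTorch illustration of that head: the hidden size, dropout rate, and learning rate are assumptions, and in the full model the input embeddings would come from a Hugging Face RoBERTa encoder (e.g. `roberta-base` last hidden states, dimension 768) rather than random tensors.

```python
import torch
import torch.nn as nn


class BiGRUAttentionHead(nn.Module):
    """Sketch of a BiGRU + additive-attention classification head over
    contextual token embeddings (e.g. RoBERTa last hidden states)."""

    def __init__(self, embed_dim=768, hidden_dim=128, dropout=0.3):
        super().__init__()
        # Bidirectional GRU captures left-to-right and right-to-left context.
        self.bigru = nn.GRU(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # One scalar attention score per time step.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.dropout = nn.Dropout(dropout)
        # Single logit for binary (sarcastic / not sarcastic) classification.
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, embeddings, attention_mask=None):
        # embeddings: (batch, seq_len, embed_dim)
        out, _ = self.bigru(embeddings)            # (batch, seq_len, 2*hidden)
        scores = self.attn(out).squeeze(-1)        # (batch, seq_len)
        if attention_mask is not None:
            # Ignore padding positions when normalizing attention weights.
            scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)    # attention over tokens
        pooled = (weights.unsqueeze(-1) * out).sum(dim=1)  # weighted sum
        return self.classifier(self.dropout(pooled)).squeeze(-1)  # logits


# Training, per the abstract, would minimize binary cross-entropy with Adam:
# loss_fn = nn.BCEWithLogitsLoss()
# optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)  # lr is assumed
```

In use, the head consumes `last_hidden_state` from a RoBERTa forward pass; `BCEWithLogitsLoss` is applied to the raw logits, and a sigmoid threshold at 0.5 yields the final sarcasm label.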
Sreeja Balakrishnan, Rahul Suryodai, S. Manochitra, Jasgurpreet Singh Chohan, Karaka Ramakrishna Reddy, A. Smitha Kranthi and Ritu Sharma. “A Hybrid RoBERTa-BiGRU-Attention Model for Accurate and Context-Aware Figurative Language Detection”. International Journal of Advanced Computer Science and Applications (IJACSA) 16.9 (2025). http://dx.doi.org/10.14569/IJACSA.2025.0160950
@article{Balakrishnan2025,
title = {A Hybrid RoBERTa-BiGRU-Attention Model for Accurate and Context-Aware Figurative Language Detection},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2025.0160950},
url = {http://dx.doi.org/10.14569/IJACSA.2025.0160950},
year = {2025},
publisher = {The Science and Information Organization},
volume = {16},
number = {9},
author = {Sreeja Balakrishnan and Rahul Suryodai and S. Manochitra and Jasgurpreet Singh Chohan and Karaka Ramakrishna Reddy and A. Smitha Kranthi and Ritu Sharma}
}
Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.