Computer Vision Conference (CVC) 2026
21-22 May 2026
International Journal of Advanced Computer Science and Applications (IJACSA), Volume 17, Issue 3, 2026.
Abstract: Contrastive learning (CL) based on Transformer sequence encoders offers a robust framework for sequential recommendation by effectively addressing data noise and sparsity. By exploiting the advantages of CL, these models learn rich representations from sequences of users' historical interactions, leading to improved recommendations and user satisfaction. However, recent CL methods suffer from two limitations. First, CL approaches are mainly designed to process input sequences in a single direction, i.e., left to right, which is sub-optimal for sequential prediction because users' historical interactions do not necessarily follow a fixed single-direction order. Second, these models design CL objectives based solely on the input sequence, overlooking the valuable self-supervision signals available in auxiliary descriptive text. To overcome these limitations, we introduce a new framework named Bi-Transformers-Aided Contextual Contrastive Learning for Sequential Recommendation (CCLRec). Specifically, bidirectional Transformers are extended to incorporate auxiliary information through sentence embeddings formulated from each item's textual description. Next, we introduce the rolling glass step technique for handling lengthy user sequences and the descriptive features of the corresponding items, which enables more refined partitioning of user sequences. Then, the cloze task, random occlusion, and dropout masking strategies are jointly applied to generate high-quality positive samples, improving the performance of the contrastive learning objective. Comprehensive experiments on three benchmark datasets demonstrate that CCLRec consistently outperforms state-of-the-art baselines, achieving improvements of 5.69% to 6.34% in NDCG@10 across the MovieLens-1M, Amazon Beauty, and Amazon Toys datasets.
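The abstract mentions two data-preparation ideas: partitioning long user sequences into sub-sequences (the "rolling glass step") and generating positive views via masking for the CL objective. The sketch below is only an illustration of these general ideas, not the authors' actual implementation; the function names, window parameters, and mask ratio are hypothetical, and a sliding window stands in for the rolling glass step.

```python
import random

def rolling_window_partition(sequence, window_size, step):
    """Partition a long interaction sequence into overlapping fixed-length
    sub-sequences (a generic sliding-window stand-in for the paper's
    'rolling glass step'; parameters here are illustrative)."""
    if len(sequence) <= window_size:
        return [sequence]
    return [sequence[i:i + window_size]
            for i in range(0, len(sequence) - window_size + 1, step)]

def mask_items(sequence, mask_token="[MASK]", ratio=0.3, seed=None):
    """Cloze-style random occlusion: replace a fraction of items with a
    mask token to create an augmented view; two such views of the same
    sub-sequence form a positive pair for contrastive learning."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(sequence) * ratio))
    positions = set(rng.sample(range(len(sequence)), n_mask))
    return [mask_token if i in positions else item
            for i, item in enumerate(sequence)]

# Hypothetical user history (item titles would also supply descriptive text).
history = ["lipstick", "mascara", "shampoo", "serum", "toner", "cleanser", "balm"]
windows = rolling_window_partition(history, window_size=4, step=2)
view_a = mask_items(windows[0], seed=0)  # first augmented view
view_b = mask_items(windows[0], seed=1)  # second view; (view_a, view_b) is a positive pair
```

In a full pipeline, both views would be encoded by the bidirectional Transformer (together with the sentence embeddings of the item descriptions) and pulled together by the contrastive loss, while views from other users' sequences serve as negatives.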
Adel Alkhalil, Ikhlaq Ahmed, Zafran Khan, Mazhar Abbas, Aakash Ahmad and Abdulrahman Albarrak. “Bi-Transformers-Aided Contextual Contrastive Learning for Sequential Recommendation”. International Journal of Advanced Computer Science and Applications (IJACSA) 17.3 (2026). http://dx.doi.org/10.14569/IJACSA.2026.01703108
@article{Alkhalil2026,
title = {Bi-Transformers-Aided Contextual Contrastive Learning for Sequential Recommendation},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2026.01703108},
url = {http://dx.doi.org/10.14569/IJACSA.2026.01703108},
year = {2026},
publisher = {The Science and Information Organization},
volume = {17},
number = {3},
author = {Adel Alkhalil and Ikhlaq Ahmed and Zafran Khan and Mazhar Abbas and Aakash Ahmad and Abdulrahman Albarrak}
}
Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.