DOI: 10.14569/IJACSA.2025.0160695
Cross-Domain Evaluation of Large Language Models for Abstractive Text Summarization: An Empirical Perspective

Author 1: Walid Mohamed Aly
Author 2: Taysir Hassan A. Soliman
Author 3: Amr Mohamed AbdelAziz

International Journal of Advanced Computer Science and Applications (IJACSA), Volume 16, Issue 6, 2025.

Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities in generating human-like text; however, their effectiveness in abstractive summarization across diverse domains remains underexplored. This study conducts a comprehensive evaluation of six open-source LLMs across four datasets: CNN/Daily Mail and NewsRoom (news), SAMSum (dialogue), and ArXiv (scientific), using zero-shot and in-context learning techniques. Performance was assessed using ROUGE and BERTScore metrics, and inference time was measured to examine the trade-off between accuracy and efficiency. For long documents, a sentence-based chunking strategy is introduced to overcome context-length limitations. Results reveal that in-context learning consistently enhances summarization quality and that chunking improves performance on long scientific texts. Model performance varies with architecture, scale, prompt design, and dataset characteristics. Qualitative analysis further demonstrates that the top-performing models produce summaries that are coherent, informative, and contextually aligned with human-written references, despite occasional lexical divergence or factual omissions. These findings provide practical insights into designing instruction-based summarization systems using open-source LLMs.
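The article page does not include code, but the sentence-based chunking strategy mentioned in the abstract can be illustrated with a short Python sketch: whole sentences are grouped into chunks that fit a context budget, each chunk is summarized, and the partial summaries are merged in a final pass. The generate callable, the token budget, and the prompt wording below are illustrative assumptions, not the authors' actual configuration.

# Hypothetical sketch of sentence-based chunking for long-document summarization.
# `generate` stands in for any instruction-tuned LLM call; it is not part of the paper.
# Requires: pip install nltk, plus nltk.download("punkt") for the sentence tokenizer.
from nltk.tokenize import sent_tokenize

MAX_CHUNK_TOKENS = 1500  # illustrative budget, not the paper's setting


def approx_tokens(text: str) -> int:
    """Crude length estimate based on whitespace-separated words."""
    return len(text.split())


def chunk_by_sentences(document: str, max_tokens: int = MAX_CHUNK_TOKENS) -> list[str]:
    """Group whole sentences into chunks that stay under the token budget."""
    chunks, current, current_len = [], [], 0
    for sentence in sent_tokenize(document):
        length = approx_tokens(sentence)
        if current and current_len + length > max_tokens:
            chunks.append(" ".join(current))
            current, current_len = [], 0
        current.append(sentence)
        current_len += length
    if current:
        chunks.append(" ".join(current))
    return chunks


def summarize_long_document(document: str, generate) -> str:
    """Summarize each chunk, then summarize the concatenated partial summaries."""
    partial = [
        generate(f"Summarize the following text in 3-4 sentences:\n\n{chunk}")
        for chunk in chunk_by_sentences(document)
    ]
    return generate(
        "Combine the following partial summaries into one coherent summary:\n\n"
        + "\n".join(partial)
    )

A two-pass scheme of this kind keeps every individual prompt within the model's context window, at the cost of one extra generation step for the final merge.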

Keywords: Large language models; natural language processing; automatic text summarization; prompt engineering; summarization evaluation
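The two evaluation metrics named in the abstract, ROUGE and BERTScore, are commonly computed with the rouge-score and bert-score Python packages. The minimal sketch below shows that usage on a toy example; it makes no claim about the exact metric configuration used in the study.

# Minimal sketch: scoring a generated summary against a reference with
# ROUGE (rouge-score package) and BERTScore (bert-score package).
from rouge_score import rouge_scorer
from bert_score import score as bert_score

reference = "The committee approved the new budget after a short debate."
candidate = "After a brief debate, the committee passed the new budget."

# ROUGE-1, ROUGE-2 and ROUGE-L F-measures
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
for name, result in scorer.score(reference, candidate).items():
    print(f"{name}: F1={result.fmeasure:.3f}")

# BERTScore expects lists of candidates and references
P, R, F1 = bert_score([candidate], [reference], lang="en")
print(f"BERTScore F1={F1.mean().item():.3f}")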

Walid Mohamed Aly, Taysir Hassan A. Soliman and Amr Mohamed AbdelAziz, “Cross-Domain Evaluation of Large Language Models for Abstractive Text Summarization: An Empirical Perspective,” International Journal of Advanced Computer Science and Applications (IJACSA), 16(6), 2025. http://dx.doi.org/10.14569/IJACSA.2025.0160695

@article{Aly2025,
title = {Cross-Domain Evaluation of Large Language Models for Abstractive Text Summarization: An Empirical Perspective},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2025.0160695},
url = {http://dx.doi.org/10.14569/IJACSA.2025.0160695},
year = {2025},
publisher = {The Science and Information Organization},
volume = {16},
number = {6},
author = {Walid Mohamed Aly and Taysir Hassan A. Soliman and Amr Mohamed AbdelAziz}
}



Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
