The Science and Information (SAI) Organization
DOI: 10.14569/IJACSA.2026.0170397

Domain-Agnostic Knowledge Graph Construction for Systematic Hallucination Reduction and Knowledge Reusability in Large Language Models

Author 1: Durvesh Narkhede
Author 2: Rama Gaikwad
Author 3: Saniya Jadhav
Author 4: Pratiksha Ovhal
Author 5: Nigam Roy
Author 6: Prasad Dhanade

International Journal of Advanced Computer Science and Applications (IJACSA), Volume 17, Issue 3, 2026.


Abstract: Large Language Models (LLMs) have rapidly advanced the capabilities of automated reasoning and text generation, yet they continue to hallucinate when responding to domain-specific or rapidly evolving queries due to limitations in their static, parametric knowledge. This challenge is especially significant in high-stakes domains where factual accuracy is critical. To address this gap, the present study introduces a domain-agnostic framework called the Web-Constructed Knowledge Graph (WCKG), designed to ground LLM outputs in verifiable, web-retrieved information. Unlike conventional Retrieval-Augmented Generation (RAG) pipelines, WCKG transforms ad-hoc retrieval into structured, reusable knowledge through automated, query-triggered web searches that extract entities and relations and synthesize them into lightweight, provenance-aware knowledge graphs maintained locally within user sessions. A global registry stores only abstracted metadata, ensuring decentralized knowledge management and privacy while enabling efficient indexing and discovery. Web-grounded reasoning is achieved by serializing relevant graph fragments directly into LLM prompts. Experimental evaluation demonstrates that this framework generates coherent knowledge graphs, supports iterative refinement through user interactions, and improves the reliability of model responses across diverse domains, achieving an average hallucination reduction of 3.3% over a RAG baseline. The findings imply that WCKG can convert transient LLM interactions into evolving knowledge resources, offering a practical foundation for long-term reasoning, model adaptation, and decentralized knowledge sharing in future AI systems.
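The pipeline the abstract describes — extracting entity–relation triples from web retrieval, storing them with provenance in a session-local graph, and serializing relevant fragments into LLM prompts — can be illustrated with a minimal sketch. This is not the authors' implementation; all class and function names (`Triple`, `SessionGraph`, `serialize_for_prompt`) and the example facts are illustrative assumptions.

```python
# Minimal sketch of the WCKG idea from the abstract: keep provenance-aware
# (subject, relation, object) triples in a session-local graph, then serialize
# the fragment relevant to a query entity into grounded prompt context.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str
    source_url: str  # provenance: where this fact was retrieved from

@dataclass
class SessionGraph:
    triples: list = field(default_factory=list)

    def add(self, t: Triple) -> None:
        # Idempotent insert, so repeated retrievals refine rather than duplicate.
        if t not in self.triples:
            self.triples.append(t)

    def fragment(self, entity: str) -> list:
        """Naive relevance filter: triples that mention the entity."""
        return [t for t in self.triples if entity in (t.subject, t.obj)]

def serialize_for_prompt(triples: list) -> str:
    """Render a graph fragment as verifiable context lines for an LLM prompt."""
    lines = [f"- {t.subject} {t.relation} {t.obj} [source: {t.source_url}]"
             for t in triples]
    return "Verified facts:\n" + "\n".join(lines)

# Hypothetical facts standing in for web-extracted triples.
g = SessionGraph()
g.add(Triple("aspirin", "inhibits", "COX-1", "https://example.org/a"))
g.add(Triple("aspirin", "treats", "fever", "https://example.org/b"))
g.add(Triple("ibuprofen", "treats", "pain", "https://example.org/c"))
print(serialize_for_prompt(g.fragment("aspirin")))
```

In a full system, the serialized fragment would be prepended to the user query before generation, and only abstracted metadata about the graph (not the triples themselves) would be pushed to the global registry.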

Keywords: Large Language Models; knowledge graph construction; hallucination reduction; Retrieval-Augmented Generation; web-grounded reasoning; decentralized knowledge systems

Durvesh Narkhede, Rama Gaikwad, Saniya Jadhav, Pratiksha Ovhal, Nigam Roy and Prasad Dhanade. “Domain-Agnostic Knowledge Graph Construction for Systematic Hallucination Reduction and Knowledge Reusability in Large Language Models”. International Journal of Advanced Computer Science and Applications (IJACSA) 17.3 (2026). http://dx.doi.org/10.14569/IJACSA.2026.0170397

@article{Narkhede2026,
title = {Domain-Agnostic Knowledge Graph Construction for Systematic Hallucination Reduction and Knowledge Reusability in Large Language Models},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2026.0170397},
url = {http://dx.doi.org/10.14569/IJACSA.2026.0170397},
year = {2026},
publisher = {The Science and Information Organization},
volume = {17},
number = {3},
author = {Durvesh Narkhede and Rama Gaikwad and Saniya Jadhav and Pratiksha Ovhal and Nigam Roy and Prasad Dhanade}
}



Copyright Statement: This is an open-access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, including commercial use, provided the original work is properly cited.
