DOI: 10.14569/IJACSA.2025.0161057

Machine Learning-Driven Emotional Feedback Analysis and Adaptive Content Generation for VR Movie and TV Users

Author 1: Yun TANG

International Journal of Advanced Computer Science and Applications (IJACSA), Volume 16 Issue 10, 2025.

  • Abstract and Keywords
  • How to Cite this Article
  • BibTeX Source

Abstract: With the growing demand for immersive audiovisual experiences, user sentiment feedback analysis has become a pivotal factor in improving personalization and interactivity in virtual reality (VR) movie and television. This study proposes a machine learning–driven framework that integrates sentiment feedback recognition and adaptive content generation to optimize user experience. First, a Long Short-Term Memory (LSTM) model is developed to analyze multimodal sentiment feedback data, including physiological signals, behavioral responses, and interactive actions. The model achieves an average recognition accuracy of 75.75% across four basic emotions—happiness, sadness, anger, and fear—demonstrating its ability to capture dynamic and continuous emotional patterns. Based on real-time sentiment feedback, a Deep Q-Network (DQN) reinforcement learning algorithm is employed to generate adaptive VR content that aligns with users’ current emotional states. Experimental validation with 100 participants shows that adaptive content generation increases overall satisfaction scores from 6.2 to 7.8, and the matching degree between user emotions and content improves by more than 20%. The integration of sentiment feedback analysis and reinforcement learning establishes a closed feedback loop—emotion detection → adaptive adjustment → feedback optimization—that enhances immersion, empathy, and user engagement. This research provides a data-driven reference for the intelligent evolution of VR movie and television, and future work will expand to fine-grained emotional dimensions and multimodal fusion to improve recognition precision and real-time adaptive generation performance.

Keywords: Machine learning; VR movie and television; user sentiment feedback analysis; adaptive content generation; reinforcement learning
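
The abstract describes a two-stage pipeline: an LSTM recognizes the user's emotional state from multimodal feedback, and a DQN selects a content adaptation from that state. The following is a minimal, hypothetical sketch of that closed loop in PyTorch, not the authors' implementation: the feature dimension, sequence length, network sizes, and the set of adaptation actions are all assumptions made for illustration, and training of both networks is omitted.

# Minimal sketch (assumptions noted above) of the loop: emotion detection -> adaptive adjustment.
import random
import torch
import torch.nn as nn

EMOTIONS = ["happiness", "sadness", "anger", "fear"]                         # four basic emotions from the paper
ACTIONS = ["calm_scene", "uplift_scene", "slow_pacing", "raise_intensity"]   # hypothetical content adaptations

class EmotionLSTM(nn.Module):
    """LSTM over fused multimodal features (physiological, behavioral, interactive)."""
    def __init__(self, feature_dim=32, hidden_dim=64, num_emotions=len(EMOTIONS)):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_emotions)

    def forward(self, x):                  # x: (batch, time, feature_dim)
        _, (h, _) = self.lstm(x)           # h: (num_layers, batch, hidden_dim)
        return self.head(h[-1])            # emotion logits

class DQN(nn.Module):
    """Maps the current emotion distribution (state) to Q-values over adaptation actions."""
    def __init__(self, num_emotions=len(EMOTIONS), num_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_emotions, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, state):
        return self.net(state)

def adapt_step(emotion_model, policy, feedback_seq, epsilon=0.1):
    """One pass of the closed loop: detect emotion, then pick an adaptation epsilon-greedily."""
    with torch.no_grad():
        probs = torch.softmax(emotion_model(feedback_seq), dim=-1)   # state = emotion distribution
        if random.random() < epsilon:                                # exploration
            action = random.randrange(len(ACTIONS))
        else:                                                        # exploitation via Q-values
            action = policy(probs).argmax(dim=-1).item()
    return EMOTIONS[probs.argmax(dim=-1).item()], ACTIONS[action]

if __name__ == "__main__":
    emotion_model, policy = EmotionLSTM(), DQN()
    fake_feedback = torch.randn(1, 50, 32)          # 50 time steps of fused multimodal features
    emotion, action = adapt_step(emotion_model, policy, fake_feedback)
    print(f"detected emotion: {emotion} -> chosen adaptation: {action}")

In the paper's framing, the recognized emotion distribution would serve as the reinforcement-learning state and user satisfaction or emotion-content matching would drive the reward signal that trains the DQN; those details are not specified in the abstract and are left out of this sketch.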

Yun TANG. “Machine Learning-Driven Emotional Feedback Analysis and Adaptive Content Generation for VR Movie and TV Users”. International Journal of Advanced Computer Science and Applications (IJACSA) 16.10 (2025). http://dx.doi.org/10.14569/IJACSA.2025.0161057

@article{TANG2025,
title = {Machine Learning-Driven Emotional Feedback Analysis and Adaptive Content Generation for VR Movie and TV Users},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2025.0161057},
url = {http://dx.doi.org/10.14569/IJACSA.2025.0161057},
year = {2025},
publisher = {The Science and Information Organization},
volume = {16},
number = {10},
author = {Yun TANG}
}



Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
