The Science and Information (SAI) Organization

IJACSA Volume 15 Issue 10

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.

View Full Issue

Paper 1: Development of an AI Based Failure Predictor Model to Reduce Filament Waste for a Sustainable 3D Printing Process

Abstract: This paper delves into the integration of motion tracking technology for real-time monitoring in 3D printing, with a focus on the popular fused filament fabrication (FFF) technique. Despite FFF's cost-efficiency, prevalent printing errors pose significant challenges to its commercial and environmental viability. This study proposes a solution by incorporating motion tracking nodes into the 3D printing process, tracked by cameras, enabling dynamic identification and rectification of printing failures. Addressing key research questions, the paper explores the applicability of motion tracking for failure detection, its impact on printed object quality, and the potential reduction in 3D printing waste. The proposed real-time monitoring system aims to fill a critical gap in existing 3D printing procedures, providing dynamic failure identification. The study integrates machine learning, computer vision, and motion tracking technologies, employing an inductive theoretical development strategy with active learning iterations for continuous improvement. Highlighting the revolutionary potential of 3D printing and acknowledging challenges in continuous monitoring and waste management, the suggested system pioneers real-time monitoring, striving to enhance efficiency, sustainability, and adaptability to diverse production demands. As the study progresses into implementation, it aspires to contribute significantly to the evolution of 3D printing technologies.

Author 1: Noushin Mohammadian
Author 2: Melissa Sofía Molina Silva
Author 3: Giorgi Basiladze
Author 4: Omid Fatahi Valilai

Keywords: 3D printing; Fused Filament Fabrication (FFF); motion tracking; environmental sustainability; printing waste reduction

PDF

Paper 2: Developing a Blockchain Based Supply Chain CO2 Footprint Tracking Framework Enabled by IoT

Abstract: In various industries, the convergence of the Internet of Things (IoT) and blockchain technologies has left an indelible mark on the pursuit of decarbonization. These innovations have seamlessly integrated into diverse fields, from manufacturing to logistics, offering sustainable solutions that enhance operational efficiency, transparency, and accountability. The interplay between IoT and blockchain has particularly contributed to the reduction of carbon footprints, fostering environmentally responsible practices. As industries embrace these technologies, the decentralized and transparent nature of blockchain ensures traceability in supply chains, while IoT devices facilitate real-time data monitoring. Together, they create a powerful synergy that not only streamlines processes but also drives a collective commitment to reducing environmental impact, marking a paradigm shift towards greener and more sustainable industries. Within this landscape, this research offers a comprehensive exploration of the transformative potential of blockchain in supply chain management, emphasizing its intricate connection with IoT and carbon footprint reduction. The conceptual model presented delineates the seamless integration of these elements, providing a nuanced understanding of how blockchain can revolutionize transparency and sustainability. Through practical examples and a layered diagram, it showcases the tangible benefits of this integration, highlighting its capacity to enhance data integrity and transparency in real-world supply chain scenarios. The research stands as a testament to the instrumental role that blockchain can play in fostering environmentally responsible practices within supply chains, laying the groundwork for a more sustainable future.

Author 1: Mohammad Yaser Mofatteh
Author 2: Roshanak Davallou
Author 3: Chaida Ndahiro Ishimwe
Author 4: Swaresh Suresh Divekar
Author 5: Omid Fatahi Valilai

Keywords: Blockchain; IoT; supply chain; carbon footprint; sustainability

PDF

Paper 3: Triggered Screen Restriction Framework: Transforming Gamified Physical Interventions

Abstract: This study examines the effectiveness of the Triggered Screen Restriction (TSR) framework, a novel technique to promote exercise that combines negative reinforcement with adaptive gamification elements. The study examined the TSR framework’s impact on physical activity levels, addictive nature, health indicators, psychological factors, and app usability compared to a control group. A mixed experimental design was employed, with 30 participants randomly assigned to either an experimental group using a custom iOS app with the TSR framework or a control group using a similar app without TSR features. Results revealed that the TSR group demonstrated significantly higher physical activity levels (p < .05). The TSR framework resulted in significant increases in app usage frequency (p < .001). Health indicators showed a significant improvement in balance and stability through the single-leg stance test (p < .05), while other health metrics, including maximum jumping jacks completed in one minute, post-exercise heart rate, and body composition, exhibited no significant changes. Analysis of psychological factors revealed a significant increase in perceived competence in the TSR group (p < .05), with no significant changes observed in autonomy or relatedness. The TSR intervention demonstrated significantly better usability metrics, including ease of use, system reliability, and perceived usefulness, compared to the control condition (all p < .001). The study contributes to the expanding adoption of gamified physical interventions, showcasing the TSR framework as an effective technique for addressing physical inactivity. Future research should explore long-term effectiveness, diverse populations, and integration with wearable devices to further validate and refine the TSR approach in addressing physical inactivity.

Author 1: Majed Hariri
Author 2: Richard Stone
Author 3: Ulrike Genschel

Keywords: Gamification; physical activity; negative reinforcement; triggered screen restriction framework; TSR framework; gamified physical intervention

PDF

Paper 4: AI in the Detection and Prevention of Distributed Denial of Service (DDoS) Attacks

Abstract: Distributed Denial of Service (DDoS) attacks are malicious attacks that aim to disrupt the normal flow of traffic to a targeted server or network by overwhelming the server’s infrastructure with a flood of internet traffic. This study investigates several artificial intelligence (AI) models and utilises them in a DDoS detection system. The paper examines how AI is being used to detect DDoS attacks in real-time to find the most accurate methods for improving network security. The machine learning models identified and discussed in this research include random forest, decision tree (DT), convolutional neural network (CNN), the NGBoost classifier, and stochastic gradient descent (SGD). The research findings demonstrate the effectiveness of these models in detecting DDoS attacks. The study highlights the potential for future enhancement of these technologies to strengthen the security and privacy of data servers and networks in real-time. Using a qualitative research method and comparing several AI models, the results reveal that the random forest model offers the best detection accuracy (99.9974%). This finding holds significant implications for the enhancement of future DDoS detection systems.

Author 1: Sina Ahmadi

Keywords: Artificial intelligence; Distributed Denial of Service (DDoS); machine learning; detection; accuracy

PDF
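The abstract’s headline result, that a random forest gave the best detection accuracy, can be illustrated with a from-scratch toy sketch: an ensemble of decision stumps, each trained on a bootstrap resample of synthetic flow features and combined by majority vote. The feature names, thresholds, and data below are invented for illustration and are not from the paper; a production system would use a library such as scikit-learn and real traffic traces.

```python
import random

random.seed(0)

# Synthetic flows: (packets_per_sec, avg_payload_bytes, label); label 1 = DDoS-like.
def make_flow(ddos):
    if ddos:
        return (random.gauss(900, 100), random.gauss(60, 15), 1)
    return (random.gauss(120, 40), random.gauss(500, 120), 0)

data = [make_flow(i % 2 == 0) for i in range(400)]
train, test = data[:300], data[300:]

def best_stump(sample):
    # Exhaustively pick the (feature, threshold, sign) split with fewest errors.
    best = None
    for f in (0, 1):
        for row in sample:
            t = row[f]
            for sign in (1, -1):
                errs = sum(1 for r in sample
                           if (1 if sign * (r[f] - t) > 0 else 0) != r[2])
                if best is None or errs < best[0]:
                    best = (errs, f, t, sign)
    return best[1:]

def stump_predict(stump, row):
    f, t, sign = stump
    return 1 if sign * (row[f] - t) > 0 else 0

# "Forest": each stump sees a bootstrap resample of the training flows.
forest = [best_stump([random.choice(train) for _ in range(60)])
          for _ in range(15)]

def forest_predict(row):
    votes = sum(stump_predict(s, row) for s in forest)
    return 1 if votes * 2 >= len(forest) else 0

acc = sum(forest_predict(r) == r[2] for r in test) / len(test)
print(f"toy forest accuracy: {acc:.3f}")
```

On this cleanly separable synthetic data the ensemble distinguishes attack-like from benign flows almost perfectly; in practice the value of deeper trees and larger forests shows up on noisy, overlapping features.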

Paper 5: A Hybrid Regression-Based Network Model for Continuous Face Recognition and Authentication

Abstract: This research proposes a continuous remote biometric user authentication system implemented with a face recognition model pre-trained on face images. This work develops an algorithm combining the Hybrid Block Overlapping KT Polynomials (HBKT) and Regression-based Support Vector Machine (RSVM) methods for a face recognition-based remote user authentication system that uses a model pre-trained on the ORL, Face94 and GT datasets to recognize authorized users from face images captured through a webcam. HBKT polynomials enhance feature extraction by capturing local and global facial patterns, while RSVM improves classification performance through efficient regression-based decision boundaries. The system can continuously capture face images from the user’s webcam for authentication, but it can be affected by lighting variations, occlusion, and the computational overhead of continuous image capture. The system has been implemented in Python. The proposed method, compared to previous state-of-the-art algorithms, was observed to have higher F-measure, accuracy, and speed in most cases, achieving accuracies of 98.82% (ORL dataset), 96.73% (GT dataset), and 95.9% (Face94 dataset).

Author 1: Bhanu Kiran Devisetty
Author 2: Ayush Goyal
Author 3: Avdesh Mishra
Author 4: Mais W Nijim
Author 5: David Hicks
Author 6: George Toscano

Keywords: Vision-based computing; object detection; face detection; face recognition; feature extraction; feature coefficients; classification; authentication; biometrics; biometric authentication

PDF

Paper 6: Designing Conversational Agents for Student Wellbeing

Abstract: The innovative development of AI technology provides new possibilities and solutions to problems facing modern society. Student well-being has been a major concern in well-being care, especially in the post-pandemic era, and the limited availability and quality of well-being support have restricted students’ access to well-being resources. Using conversational agents (CAs), or chatbots, to empower student well-being care is a promising solution for universities, considering the availability and cost of implementation. This research aims to explore how CAs can assist students with possible well-being concerns. We invited 96 participants to fill out surveys with their demographic information and 11 short-answer questions concerning their well-being and their acceptance and expectations of CAs. The results suggested that the participants accepted the use of well-being CAs, albeit with ethical concerns. Upon user acceptance, the participants expressed expectations for design features such as facial expression recognition, translation, images, and personalized long-term memory. Based on the results, this work presents a conceptual framework and chat flows for the design of a student well-being chatbot, providing a user-centered design example for UX designers in the well-being domain. Further research will introduce detailed design discoveries and a high-fidelity CA prototype to shed light on student well-being support applications. Implementing the CA will enhance the accessibility and quality of student well-being services, fostering a healthier campus environment.

Author 1: Jieyu Wang
Author 2: Li Zhang
Author 3: Dingfang Kang
Author 4: Katherina G. Pattit

Keywords: Conversational agent/chatbot; wellbeing; UX design

PDF

Paper 7: Data Encoding with Generative AI: Towards Improved Machine Learning Performance

Abstract: This article explores the design and implementation of a Generative AI-based data encoding system aimed at enhancing human resource management processes. Addressing the complexity of HR data and the need for informed decision-making, the study introduces a novel approach that leverages Generative AI for data encoding. This approach is applied to an HR database to develop a machine learning model designed to create a salary simulator, capable of generating accurate and personalized salary estimates based on factors such as work experience, skills, geographical location, and market trends. The aim of this approach is to improve the performance of the machine learning model. Experimental results indicate that this encoding approach improves the accuracy and fairness of salary determinations. Overall, the article demonstrates how AI can revolutionize HR management by delivering innovative solutions for more equitable and strategic compensation practices.

Author 1: Abdelkrim SAOUABE
Author 2: Hicham OUALLA
Author 3: Imad MOURTAJI

Keywords: Data encoding; Generative AI; salary simulator; human resource management; machine learning

PDF

Paper 8: Cross-Modal Hash Retrieval Model for Semantic Segmentation Network for Digital Libraries

Abstract: Retracted: After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA`s Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

Author 1: Siyu Tang
Author 2: Jun Yin

Keywords: Digital library; hash retrieval; semantic segmentation; word2vec; fully convolutional neural network

PDF

Paper 9: AI-Powered Waste Classification Using Convolutional Neural Networks (CNNs)

Abstract: In Malaysia, approximately 70%-80% of recyclable materials end up in landfills due to low participation in the Separation at Source Initiative. This is largely attributed to the public perception that waste segregation is a foreign idea, coupled with a lack of public knowledge: around 72.19% of residents are unsure about waste categorization and proper waste disposal. This confusion leads to apathy toward recycling efforts, exacerbated by deficient environmental awareness. Existing waste classification systems mainly rely on manual entry of waste item names, leading to inaccuracies and a lack of user engagement, prompting a shift towards advanced deep learning models. Moreover, current systems often fail to provide comprehensive disposal guidelines, leaving users uninformed. This project addresses the gap by developing an AI-Powered Waste Classification System incorporating a Convolutional Neural Network (CNN), classifying waste automatically and providing environmentally friendly disposal guidelines. By leveraging primary and secondary waste image data, the project achieves a training accuracy of 80.66% and a validation accuracy of 77.62% in waste classification. The uniqueness of this project lies in its use of a CNN within a user-friendly web interface that allows the user to capture or upload a waste image, offering immediate classification results and sustainable disposal guidelines. It also enables users to locate recycling centers and access a dashboard. This system represents an ongoing effort to educate people and contribute to the field of waste management. It promotes Sustainable Development Goal (SDG) 12 (Responsible Consumption and Production) and SDG 13 (Climate Action), contributes to zero-waste efforts, raises environmental awareness, and aligns with Malaysia's goals to increase the recycling rate to 40% and reduce waste sent to landfills by 2025.

Author 1: Chan Jia Yi
Author 2: Chong Fong Kim

Keywords: Convolutional neural networks; CNN; deep learning; waste classification; recycling; zero waste; SDGs

PDF

Paper 10: Attention Mechanism-Based CNN-LSTM for Abusive Comments Detection and Classification in Social Media Text

Abstract: Human interaction through social networks, blogs, forums, and online news portals has increased dramatically in recent years. People use these platforms to express their feelings, but hateful comments are sometimes spread as well. Cyberbullying begins when abusive language is used in online comments to attack individuals such as celebrities and politicians, products, or groups of people associated with a given country, age, or religion. Due to the ever-growing number of messages, it is challenging to manually recognize these abusive comments on social media platforms. This research work concentrates on a novel attention mechanism-based hybrid Convolutional Neural Network - Long Short Term Memory (CNN-LSTM) model to detect abusive comments by extracting more contextual information from individual sentences. The proposed model is compared with various models on the dataset provided by the shared task on Abusive Comment Detection in Tamil – ACL 2022, which contains nine class labels: Misandry, Counter-speech, Xenophobia, Misogyny, Hope-speech, Homophobia, Transphobic, Not-Tamil and None-of-the-above. We obtained accuracies of 67.14%, 68.92%, 65.35% and 68.75% with Naïve Bayes, Support Vector Machine, Logistic Regression and Random Forest, respectively. Furthermore, we applied the same dataset to deep learning models, namely Convolutional Neural Networks (CNN), Long Short Term Memory (LSTM) and Bidirectional Long Short Term Memory (Bi-LSTM), and obtained accuracies of 70.28%, 71.67% and 69.45%, respectively. To obtain more contextual information semantically, a novel attention mechanism is applied to the hybrid CNN-LSTM model, yielding an accuracy of 75.98%, an improvement over all the developed models.

Author 1: BalaAnand Muthu First
Author 2: Kogilavani Shanmugavadive
Author 3: Veerappampalayam Easwaramoorthy Sathishkumar
Author 4: Muthukumaran Maruthappa
Author 5: Malliga Subramanian
Author 6: Rajermani Thinakaran

Keywords: Attention mechanism; hybrid CNN-LSTM model; machine learning model; deep learning model; abusive comments detection

PDF
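The attention step the abstract credits for the accuracy gain can be reduced to a small, self-contained sketch: score each recurrent hidden state against a query vector, normalize the scores with softmax, and pool the states by their weights so informative tokens dominate the sentence representation. The vectors below are made-up stand-ins for LSTM outputs, not the paper’s actual architecture.

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(states, query):
    # Dot-product score of each hidden state against the query vector.
    scores = [sum(s * q for s, q in zip(state, query)) for state in states]
    weights = softmax(scores)
    # Weighted sum of the states: salient time steps dominate the pooled vector.
    dim = len(states[0])
    pooled = [sum(w * state[d] for w, state in zip(weights, states))
              for d in range(dim)]
    return pooled, weights

# Three stand-in "LSTM hidden states" for a 3-token comment (2-dim for brevity);
# the middle token is deliberately the most salient.
states = [[0.1, 0.0], [2.0, 1.0], [0.2, 0.1]]
pooled, weights = attention_pool(states, query=[1.0, 1.0])
print("attention weights:", [round(w, 3) for w in weights])
```

In a trained model the query (or a small scoring network) is learned jointly with the CNN-LSTM, so the weights come to highlight the tokens most indicative of abusive content.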

Paper 11: Knowledge Graph-Based Badminton Tactics Mining and Reasoning for Badminton Player Training Pattern Analysis and Optimization

Abstract: As the global emphasis on sports data analysis and athlete performance optimization continues to grow, traditional badminton training methods are increasingly insufficient to meet the demands of modern high-level competitive sports. The exploration and reasoning of badminton tactics can significantly aid coaches and athletes in better comprehending game strategies, playing a vital role in the analysis and optimization of training methods. By utilizing knowledge graph-based badminton tactics mining, an approach involving heterogeneous graph splitting is employed, coupled with the incorporation of a cross-relational attention mechanism within relational graph neural networks. This mechanism assigns varying weights based on the importance of neighboring nodes across different relations, facilitating information aggregation and dissemination across multiple relationships. Furthermore, to address the challenges posed by the complexity of large-scale knowledge graphs, which feature numerous entity relationships and intricate internal structures, techniques such as training subgraph sampling, positive-negative sampling, and block-diagonal matrix decomposition are introduced. These techniques help to reduce the computational load and complexity of model training, while also enhancing the model's generalization capabilities. Finally, comparative experiments conducted on a proprietary badminton tactics dataset demonstrated the effectiveness and superiority of the proposed model improvements when reasonable parameters were applied. The case study shows that this approach holds considerable promise for the analysis and optimization of badminton players' training strategies.

Author 1: Xingli Hu
Author 2: Jiangtao Li
Author 3: Ren Cai

Keywords: Badminton tactical analysis; graph neural networks; attention mechanisms; training pattern optimization; heterogeneous graph splitting; artificial intelligence

PDF

Paper 12: Analysis and Usability Evaluation of Virtual Reality in Cultural Landscape Promotion Platform Application

Abstract: To improve the efficiency of analysing the application of virtual reality technology in cultural landscape promotion platforms and to enhance the accuracy of usability assessment, a feasibility assessment method based on MA-BiGRU is proposed. Firstly, the application of virtual reality technology in cultural landscape promotion platforms is analysed and an application feasibility assessment is designed; secondly, the Mayfly algorithm (MA) is combined with a BiGRU network to propose a usability assessment algorithm for virtual reality technology based on the MA-BiGRU model; lastly, the feasibility and validity of the proposed method are analysed using actual cases. The results show that, compared with the other models evaluated, the proposed method has higher assessment and prediction accuracy. It also effectively assists in completing the virtual reproduction of the cultural landscape promotion platform of Dongao Deng Village, Dongtou District, Wenzhou City, and improves the design effect of virtual reality technology in the cultural landscape promotion platform.

Author 1: Yufang Huang
Author 2: Yin Luo
Author 3: Jingyu Zheng
Author 4: Xunxiang Li

Keywords: Virtual reality technology; cultural landscape promotion platform; usability assessment algorithm; Mayfly algorithm

PDF

Paper 13: Migrating from Monolithic to Microservice Architectures: A Systematic Literature Review

Abstract: Migration from monolithic software systems to a modern microservice architecture is a critical process for enhancing software systems' scalability, maintainability, and performance. This study conducted a systematic literature review to explore the various methodologies, techniques, and algorithms used in migrating monolithic systems to modern microservice architectures. Furthermore, by examining recent literature to identify significant patterns, challenges, and optimal solutions, the study underscores the role of artificial intelligence in enhancing the efficiency and effectiveness of the migration process. It also emphasizes the importance of migrating monolithic systems to microservices by synthesizing research studies showing that microservices enable greater flexibility, fault tolerance, and independent scalability. The findings offer valuable insights for researchers and practitioners in the software industry, and the study provides practical guidance on implementing AI-driven methodologies in software architecture evolution. Finally, we highlight future research directions toward automating software architecture migration.

Author 1: Hossam Hassan
Author 2: Manal A. Abdel-Fattah
Author 3: Wael Mohamed

Keywords: Software migration; software evolution; monolithic architecture; microservice architecture; systematic literature review

PDF

Paper 14: A Review on NS Beyond 5G: Techniques, Applications, Challenges and Future Research Directions

Abstract: With the advent of the fifth-generation (5G) era, many Internet of Things (IoT) applications have emerged to make life more convenient and intelligent. While the number of connected devices is growing, the network requirements of each device also differ. Network slicing (NS), as an emerging technology, provides multiple logical networks on shared infrastructure. Each of these logical networks can provide specialized services for the needs of different applications by defining its own logical topology, reliability and security level. This article provides an overview of the basic architecture, categories, and life cycle of network slicing. It then summarizes two kinds of resource allocation methods and the security problems associated with three kinds of network slicing technologies. A survey of recent studies finds that network slicing is widely used in the Industrial Internet of Things (IIoT), the Internet of Medical Things (IoMT), in-vehicle systems and other applications. It improves network efficiency and service quality and enhances security and privacy by optimizing indicators such as latency, resource management and service quality. Finally, future research directions for network slicing are proposed in light of the challenges faced by the different research methods.

Author 1: Cui Zhiyi
Author 2: Azana Hafizah Mohd Aman
Author 3: Faizan Qamar

Keywords: 5G; network slicing; resource allocation; dynamic allocation; security

PDF

Paper 15: Real Time Object Detection for Sustainable Air Conditioner Energy Management System

Abstract: Air conditioning has become indispensable for maintaining human comfort, especially during hot weather, as people rely on it to stay cool indoors. However, the long-term and uncontrolled use of air conditioners has significantly contributed to climate change and environmental degradation. The extensive use of air conditioners releases more carbon dioxide, a greenhouse gas, into the atmosphere, exacerbating global warming and leading to adverse climate impacts. The proposed sustainable air conditioning energy management system aims to address this issue by optimising air conditioner use while minimising its environmental footprint and mitigating climate change. Current air conditioning systems in offices, buildings, and homes typically rely on fixed temperature settings, leading to excessive energy consumption and increased greenhouse gas emissions. Existing solutions, such as fixed timers, manual timer settings, and physical controllers, are ineffective as they cannot dynamically respond to changes in environmental conditions, such as room occupancy and activity levels, resulting in significant inefficiencies and environmental hazards. To overcome these limitations, the proposed system introduces an innovative solution using software engineering technology, specifically real-time object detection, to control air conditioning energy usage. This approach redefines air conditioning management by allowing the system to dynamically adapt to room occupancy, environmental factors, and activity levels, ensuring the right amount of cooling is delivered at the right time. This method represents a concrete and effective response to climate change challenges and demonstrates a commitment to creating a sustainable and environmentally responsible future.

Author 1: Chang Shi Ying
Author 2: C. PuiLin

Keywords: Deep learning; energy consumption; energy efficiency; global warming; climate change; real-time object detection; Air conditioner optimization; smart meter; environmental footprint; climate action

PDF

Paper 16: The Review of Malaysia Digital Health Service Mobile Applications’ Usability Design

Abstract: Digital health services have become a trend and are receiving higher demand in Malaysia. However, the adoption of mobile applications supporting digital health services in the country remains low, especially among older adults, partly owing to the applications' limited usability support. This paper reviews the usability models and design factors that are relevant and applicable to the design of digital health service mobile applications for older adults. Seven usability design factors, namely efficiency, help and documentation, learnability, memorability, user-friendliness, need-base, and push-base, were found to be most suitable for supporting older adult users. Subsequently, a review was conducted on the fulfilment of these seven usability design factors in key Malaysian digital health service mobile applications. Findings showed that most applications supported high learnability and memorability but lacked support for the other five usability factors. Lastly, a usability design framework to support Malaysian digital health service mobile applications for older adult users is proposed. A full exploratory study is the next step to validate the proposed framework.

Author 1: Kah Hao Lim
Author 2: Chia Yean Lim
Author 3: Anusha Achuthan
Author 4: Chin Ernst Wong
Author 5: Vina Phei Sean Tan

Keywords: Health system accessible; ISO/IEC9126; Nielsen usability model; older adults; usability

PDF

Paper 17: Badminton Tracking and Motion Evaluation Model Based on Faster RCNN and Improved VGG19

Abstract: Badminton, as a popular sport, involves rich information on body motions and motion trajectories. Accurately identifying the swinging motions in badminton is of great significance for badminton education, promotion, and competition. Therefore, based on the Faster R-CNN multi-object tracking framework, a new badminton tracking and motion evaluation model is proposed by introducing a VGG19 network architecture and a real-time multi-person pose estimation algorithm for performance optimization. The experimental results showed that the new model achieved an average processing speed of 31.02 frames per second for five skeletal keypoints: the head, shoulder, elbow, wrist, and neck. Its highest percentage of correctly detected keypoints for the head, shoulders, elbows, wrists, and neck reached 98.05%, 98.10%, 97.89%, 97.55%, and 98.26%, respectively. The minimum mean square error and mean absolute error were only 0.021 and 0.026. The highest resource consumption rate was only 6.85%, and the highest motion evaluation accuracy was 97.71%. In addition, indoor and outdoor environments had almost no impact on the performance of the model. In summary, the study improves the fast region-based convolutional neural network and applies it to badminton tracking and motion evaluation with higher effectiveness and recognition accuracy, demonstrating a more effective approach for the development of badminton sports.

Author 1: Jun Ou
Author 2: Chao Fu
Author 3: Yanyun Cao

Keywords: Faster RCNN; VGG19; badminton; target tracking; motion evaluation

PDF

Paper 18: Cyberbullying Detection on Social Networks Using a Hybrid Deep Learning Architecture Based on Convolutional and Recurrent Models

Abstract: This research paper explores the development and efficacy of a hybrid deep learning architecture for cyberbullying detection on social media platforms, integrating Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. By leveraging the strengths of both CNNs and LSTMs, the model aims to enhance the accuracy and sensitivity of detecting cyberbullying incidents. The study systematically evaluates the performance of the proposed model through a series of experiments involving a diverse dataset derived from various social media interactions, categorized by sentiment and type of bullying. Results indicate that while the model achieves high accuracy in identifying cyberbullying, challenges such as overfitting and the need for better generalization to unseen data persist. The paper also discusses ethical considerations and the potential for bias in automated monitoring systems, stressing the importance of ethical AI practices in social media governance. The findings underscore the complexity of automated cyberbullying detection and highlight the necessity for advanced machine learning techniques that are robust, scalable, and aligned with ethical standards. This study contributes to the broader discourse on the application of artificial intelligence in enhancing digital safety and advocates for a multidisciplinary approach to address the socio-technical challenges posed by cyberbullying in the digital age.

Author 1: Aigerim Altayeva
Author 2: Rustam Abdrakhmanov
Author 3: Aigerim Toktarova
Author 4: Abdimukhan Tolep

Keywords: Cyberbullying detection; deep learning; CNN; LSTM; social media monitoring; sentiment analysis; digital safety

PDF
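As a shape-level illustration of the CNN-LSTM hybrid this abstract describes, the sketch below runs a 1-D convolution over token embeddings to extract local n-gram features, then summarizes the sequence with a simple tanh recurrence standing in for the LSTM cell. All dimensions, the random weights, and the final sigmoid scorer are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, emb_dim, n_filters, kernel, hidden = 20, 16, 8, 3, 4
x = rng.normal(size=(seq_len, emb_dim))           # one embedded post (assumed)

# 1-D convolution over the token axis (valid padding), ReLU activation
W_conv = rng.normal(size=(kernel, emb_dim, n_filters))
conv = np.stack([
    np.maximum(0, np.einsum("kd,kdf->f", x[t:t + kernel], W_conv))
    for t in range(seq_len - kernel + 1)
])                                                # feature map: (18, 8)

# Minimal recurrent summary (tanh RNN standing in for the LSTM cell)
W_in = rng.normal(size=(n_filters, hidden))
W_h = rng.normal(size=(hidden, hidden))
h = np.zeros(hidden)
for step in conv:
    h = np.tanh(step @ W_in + h @ W_h)

# Sigmoid readout: a probability-like "bullying" score in (0, 1)
score = 1.0 / (1.0 + np.exp(-(h @ rng.normal(size=hidden))))
```

A real model would learn these weights end-to-end and add embedding, dropout, and dense layers, but the convolution-then-recurrence data flow is the same.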

Paper 19: Optimization with Adaptive Learning: A Better Approach for Reducing SSE to Fit Accurate Linear Regression Model for Prediction

Abstract: Optimization provides a way to achieve an optimum: designing accurate and optimal output for a given problem using the minimum available resources. It is the task of minimizing an objective function f(x) parameterized by x, or equivalently of minimizing a cost function over the model's parameters. In machine learning, optimization is slightly different: for most conventional problems the shape, size, and type of the data are well known, and such information helps identify where improvement is needed, whereas machine-learning optimization must also work when nothing is known about new data. The method proposed in this paper, named optimization with adaptive learning, minimizes the cost, in terms of the number of iterations, for linear regression to fit the correct line to a given dataset and reduce the residual error. In regression analysis, a curve or line is fitted to the data objects so that the differences in distance between the data points and the curve or line are minimized. The proposed approach initializes random values for the parameters of the linear model and calculates the sum of squared errors (SSE). The objective is to minimize the SSE; if the SSE is large, the selected initial values need to be adjusted. The step size used in each iteration governs the direction of movement toward the local minimum at which the optimal value lies. After a certain number of iterations, the SSE reaches its minimum and stabilizes with no further change. Real-life datasets have been used for the experimental analysis.

Author 1: Vijay Kumar Verma
Author 2: Umesh Banodha
Author 3: Kamlesh Malpani

Keywords: Adaptive learning; regression; optimization; minimum; cost; objective; error; random

PDF
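The iterative SSE-reduction scheme the abstract describes can be sketched as plain gradient descent on a line's two parameters; the toy data, learning rate, and stopping tolerance below are illustrative assumptions, not the paper's adaptive-learning rule.

```python
def fit_line(xs, ys, lr=0.01, tol=1e-9, max_iter=10000):
    """Iteratively shrink the SSE of a line y = a*x + b,
    stopping once the SSE stabilizes (no further change)."""
    a, b = 0.0, 0.0                       # initial parameter guesses
    n = len(xs)
    prev_sse = float("inf")
    sse = 0.0
    for _ in range(max_iter):
        res = [y - (a * x + b) for x, y in zip(xs, ys)]
        sse = sum(r * r for r in res)     # sum of squared errors
        if prev_sse - sse < tol:          # SSE stable -> stop
            break
        prev_sse = sse
        # gradient of SSE with respect to a and b
        grad_a = -2.0 * sum(r * x for r, x in zip(res, xs))
        grad_b = -2.0 * sum(res)
        a -= lr * grad_a / n              # step against the gradient
        b -= lr * grad_b / n
    return a, b, sse

# Toy dataset lying near y = 2x; the fitted slope lands near 2.
a, b, sse = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
```

The stopping test mirrors the abstract's criterion: iteration ends when the SSE has a stable value with no change beyond the tolerance.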

Paper 20: Efficient Remote Health Monitoring Using Deep Learning and Parallel Systems

Abstract: This study presents a novel approach for non-contact extraction of physiological parameters, such as heart rate and respiratory rate, from facial images captured using RGB cameras, leveraging recent advancements in deep learning and signal processing techniques. The proposed system integrates artificial-intelligence-driven algorithms for accurately estimating vital signs, addressing key challenges such as variations in lighting conditions, facial orientation, and noise. The system is implemented on both a naive homogeneous architecture and an optimized heterogeneous CPU-GPU system to enhance real-time performance and computational efficiency. A comparative analysis is performed to evaluate processing speed, accuracy, and resource utilization across both architectures. Results demonstrate that the optimized heterogeneous system significantly outperforms the homogeneous counterpart, achieving faster processing times while maintaining high accuracy levels. This performance boost is achieved through parallel computing frameworks such as OpenMP and OpenCL, which allow for efficient resource allocation and scalability. The research highlights the potential of heterogeneous architectures for real-time healthcare applications, including remote patient monitoring and telemedicine, providing a robust solution for future developments in non-invasive health technology.

Author 1: Zakaria El Khadiri
Author 2: Rachid Latif
Author 3: Amine Saddik

Keywords: Real-time healthcare; embedded systems; heterogeneous computing; deep learning; CPU-GPU architecture

PDF

Paper 21: Smart Muni Platform: Efficient Emergency and Citizen Security Management Based on Geolocation, Technological Integration and Real Time Communication

Abstract: The global increase in crime across cities has led to the development and implementation of technological solutions, such as the Smart Muni platform, designed to enhance citizen security. This platform integrates geolocation, real-time notifications, and a digital panic button to optimize emergency management and coordination between citizens and authorities. Developed using a structured approach that included requirements analysis, system design, development, testing, validation, deployment, and maintenance, the platform employs advanced technologies such as Firebase, Amazon S3, and Twilio, ensuring scalability, high availability, and seamless communication. Initially implemented in two districts with high crime rates, Smart Muni registered an average of 10 daily alerts, with peaks of up to 50 alerts in a single day. The system has proven effective in managing frequent incidents like alcoholism and domestic violence, significantly reducing response times and improving coordination. Despite its success, Smart Muni faces challenges related to optimizing its resilience against potential system failures and improving its ability to handle increased data loads. In comparison to other international systems, Smart Muni's flexible and scalable architecture stands out, though future enhancements are required to further strengthen the system’s reliability and expand its features. Overall, Smart Muni has proven to be a valuable tool in improving citizen security, fostering stronger relationships between citizens and authorities, and contributing to a safer community.

Author 1: Gerson Castro-Chucan
Author 2: Hiroshi Chalco-Peñafiel
Author 3: Norka Bedregal-Alpaca
Author 4: Victor Cornejo-Aparicio

Keywords: Citizen security; geolocation; emergency management; cloud technology; alert platform

PDF

Paper 22: Serious Games Model for Higher-Order Thinking Skills in Science Education

Abstract: The popularity of digital games has led to the emergence of serious games, which are developed with specific purposes beyond mere entertainment. Serious games in education represent more innovative and current pedagogical approaches. Existing digital games have been shown to improve critical thinking skills, although research on science education remains limited. A preliminary study found that digital games developed for science teaching do not incorporate all aspects of Higher-Order Thinking Skills (HOTS). This study aims to identify and validate game components and design a serious game model for HOTS in science education (PKBATDPS Model), which was validated using the Electric Circuit prototype. The study is divided into four phases: analysis, design, development and evaluation. During the analysis phase, the components of the PKBATDPS model were identified. The Electric Circuit prototype was evaluated using a quasi-experimental procedure that included pre-tests, post-tests, and learning motivation questionnaires. The experiment involved 32 elementary students; 16 in the experimental group used the serious games application prototype, whereas 16 in the control group received the traditional method. The results show that the PKBATDPS Model can be effectively used to increase students’ HOTS and motivation in science education.

Author 1: Siti Norliza Awang Noh
Author 2: Hazura Mohamed
Author 3: Nor Azan Mat Zin

Keywords: Serious game; Higher-Order Thinking (HOT) skills; science education; game element; learning element

PDF

Paper 23: Lung CT Image Classification Algorithm Based on Improved Inception Network

Abstract: With the continuous development of digital technology, traditional processing of lung computed tomography medical images faces problems such as complex images, small sample sizes, and similar symptoms between diseases. Efficiently classifying lung computed tomography images has therefore become a technical challenge. Based on this, the Inception algorithm is fused with an improved U-Net fully convolutional network to construct a lung computed tomography image classification model based on the improved Inception network. Subsequently, the Inception algorithm is compared with other algorithms for performance analysis. The results show that the proposed algorithm has the highest accuracy of 92.7% and the lowest error rate of 0.013%, which is superior to the comparison algorithms. In terms of recall, the algorithm is approximately 0.121 and 0.213 higher than the ResNet and GoogLeNet algorithms, respectively. In comparison with other models, the proposed model has a classification accuracy of 98.1% for viral pneumonia, with faster convergence speed and fewer required parameters. From this result, the proposed Inception-network-based lung computed tomography image classification model can efficiently process data, provide technical support for lung computed tomography image classification, and thereby improve the accuracy of lung disease diagnosis.

Author 1: Qianlan Liu

Keywords: Image classification; inception; lung CT images; CNN; machine learning

PDF

Paper 24: Multilevel Characteristic Weighted Fusion Algorithm in Domestic Waste Information Classification

Abstract: The study of domestic waste image classification is of considerable significance for fields such as environmental protection and smart city development. To improve the classification efficiency of household waste information, a multi-feature weighted fusion method for household waste image classification is proposed. In this research, deep learning technology was applied to develop a multi-level feature-weighted fusion network model for domestic garbage image classification. The study first analyzed the VGG-16 architecture and created a garbage image dataset for domestic garbage according to the current Shenzhen garbage classification standard. Based on this, a multi-level feature-weighted fusion model for garbage image classification was constructed using VGG-16 as the backbone network. Furthermore, it was combined with the backbone feature extraction network as well as content-aware and boundary-aware feature extraction networks. The performance of the classification model was tested, and it was found that its highest classification accuracy can reach 0.98, with a shortest classification time of only 3 s. The multi-level feature-weighted fusion garbage image classification model constructed in this research not only has better classification performance, but also provides a new processing approach for the urban garbage classification problem.

Author 1: Min Li

Keywords: Multi-feature; weighted fusion; image; deep learning; waste classification

PDF

Paper 25: Novel Biomarkers for Colorectal Cancer Prediction

Abstract: Many researchers work on the important issue of identifying biomarkers linked to a particular disease, such as cancer, in order to assist in the disease's diagnosis and treatment. Several recent studies have suggested methods for identifying disease-linked genes, though only a handful of these methods were created specifically for colorectal cancer (CRC) gene prediction. This research presents a novel prediction technique to determine new biomarkers related to CRC that can assist in the diagnostic process. First, we preprocessed four microarray datasets (GSE4107, GSE8671, GSE9348 and GSE32323) using the Robust Multi-Array Average (RMA) method to remove local artifacts and normalize the values. Second, we used the chi-squared test for feature selection to identify significant features in the datasets. Finally, the features were fed to XGBoost (eXtreme Gradient Boosting) to diagnose various test scenarios. The proposed model achieves a high mean accuracy rate and a low standard deviation. When compared to other systems, the experimental findings show promise. The predicted biomarkers are validated through a review of the literature.

Author 1: Mohamed Ashraf
Author 2: M. M. El-Gayar
Author 3: Eman Eldaydamony

Keywords: Colorectal cancer (CRC); microarray; biomarkers; gene expression omnibus; feature selection; chi-squared test; XGBoost

PDF
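The chi-squared feature-selection step in the pipeline above scores each feature by how far its co-occurrence with the class labels departs from independence. A minimal sketch on binarized toy "gene" features follows; the data and variable names are assumptions for the demo, and the real study applies this to RMA-normalized microarray values before XGBoost.

```python
from collections import Counter

def chi2_score(feature, labels):
    """Chi-squared statistic for one binary feature vs. binary labels."""
    n = len(labels)
    obs = Counter(zip(feature, labels))   # observed (value, label) counts
    f_tot = Counter(feature)
    l_tot = Counter(labels)
    stat = 0.0
    for f in (0, 1):
        for l in (0, 1):
            expected = f_tot[f] * l_tot[l] / n
            if expected:
                stat += (obs[(f, l)] - expected) ** 2 / expected
    return stat

# Toy "genes": gene0 tracks the label, gene1 is pure noise.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
gene0  = [1, 1, 1, 0, 0, 0, 0, 0]   # informative
gene1  = [1, 0, 1, 0, 1, 0, 1, 0]   # uninformative
scores = {"gene0": chi2_score(gene0, labels),
          "gene1": chi2_score(gene1, labels)}
best = max(scores, key=scores.get)   # feature kept for the classifier
```

Features with the highest scores would then be passed to the boosted-tree classifier; the uninformative gene scores exactly zero here because its counts match the independence expectation.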

Paper 26: Environmental and Economic Benefit Analysis of Urban Construction Projects Based on Data Envelopment Analysis and Simulated Annealing Algorithm

Abstract: With the continuous advancement of urbanization and the sustained growth of urban populations, urban construction projects face severe challenges, and how to analyze their environmental and economic benefits has become an urgent problem. Therefore, based on the proposed method for calculating the environmental and economic benefits of urban construction projects, this study uses a cross-efficiency data envelopment analysis model for evaluation and solution. Then, an improved simulated annealing algorithm is used to optimize the environmental and economic benefits. The results showed that the improved simulated annealing algorithm tended to stabilize after 480 iterations, with maximum and minimum values of 0.86 and 0.21, respectively. The maximum F1 value was 0.988, indicating better performance. In the three selected urban construction projects, the cross-efficiency data envelopment analysis model achieved high environmental and economic benefits, demonstrating the effectiveness of the model. After optimization with the improved simulated annealing algorithm, the maximum economic benefit increased by 850,000 yuan, proving the effectiveness of the proposed method for analyzing the environmental and economic benefits of urban construction projects. It can provide more scientific decision support for construction project planning.

Author 1: Jie Gong

Keywords: DEA; simulated annealing algorithm; city building; environment; economics; benefit

PDF
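The simulated-annealing step the abstract relies on can be sketched compactly: accept any improving move, accept worsening moves with a Boltzmann probability, and cool the temperature each iteration. The toy "benefit" objective, cooling schedule, and move size below are illustrative assumptions, not the paper's improved algorithm or its DEA-based objective.

```python
import math, random

def anneal(objective, x0, temp=1.0, cooling=0.995, steps=2000, seed=42):
    """Maximize objective(x) with basic simulated annealing."""
    rng = random.Random(seed)
    x, best = x0, x0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)          # random neighbor move
        delta = objective(cand) - objective(x)
        # accept improvements always; worse moves with prob. exp(delta/T)
        if delta > 0 or rng.random() < math.exp(delta / temp):
            x = cand
        if objective(x) > objective(best):
            best = x
        temp *= cooling                            # cool down
    return best

benefit = lambda x: -(x - 3.0) ** 2 + 5.0          # toy peak: benefit 5 at x = 3
best_x = anneal(benefit, x0=0.0)
```

Early on, the high temperature lets the search escape local optima; as the temperature decays the process settles near the global maximum of the toy benefit function.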

Paper 27: Improving the Accuracy of Chili Leaf Disease Classification with ResNet and Fine-Tuning Strategy

Abstract: A lack of disease detection in plants frequently results in the spread of diseases that are difficult and expensive to treat. Rapid disease recognition enables farmers to control diseases with appropriate treatment. This study aims to support chili farmers in identifying chili plant diseases based on leaf images. This work presents a CNN design based on several existing CNN architectures that have been fine-tuned to achieve the highest possible accuracy. The study found that the ResNet101 model with the Tanh activation function, SGD optimizer, and Reduced Learning Rate (ReduceLR) schedule achieved a peak classification accuracy of 99.53%. This significant improvement demonstrates the potential of using advanced CNN techniques and fine-tuning strategies to enhance model accuracy in agricultural applications. The implications of this study extend to the field of precision agriculture, suggesting that the proposed model can be integrated into smart farming systems to improve the timely and efficient control of chili leaf diseases. Such advancements not only enhance crop yields but also contribute to sustainable agricultural practices and the economic stability of chili farmers.

Author 1: Sayuti Rahman
Author 2: Rahmat Arief Setyadi
Author 3: Asmah Indrawati
Author 4: Arnes Sembiring
Author 5: Muhammad Zen

Keywords: Chili leaf classification; convolutional neural network; ResNet101; fine-tuning; precision agriculture

PDF

Paper 28: Exploring the Impact of User Experience Elements on Virtual Reality for Emotion Regulation Through mVR-Real App

Abstract: Virtual reality, a rapidly advancing technology, has significantly evolved over the past few years, offering immersive experiences through headsets, gloves, or controllers that engage users in dynamic and captivating environments. Its applications span various fields, including video games, healthcare, education, and training simulations. However, there remains a gap in utilizing virtual reality for adolescent emotion regulation. This study explores the mVR-REAL app, designed to enhance social-emotional learning in teenagers by improving emotion regulation. Social-emotional learning is an educational approach aimed at developing essential socio-emotional skills in adolescents. Sixteen participants engaged with the mVR-REAL app through virtual scene episodes using the Meta Quest 2 headset. Emotional responses and user feedback were measured using pre- and post-test questionnaires with a five-point Likert scale (strongly agree, agree, neutral, disagree, strongly disagree), with a numerical value assigned to each option on the scale. Analysis conducted with SPSS software revealed statistically significant improvements in user experience following the use of mVR-REAL. The findings suggest that mVR-REAL has the potential to enhance user experience, evoke strong emotional responses, and foster greater engagement compared to traditional applications. These insights will inform future large-scale testing of mVR-REAL, emphasizing the importance of emotional design, as well as psychological and cultural factors, in the development of virtual reality apps for emotion regulation. However, time-related challenges were identified due to the restricted duration of the VR session, highlighting the need for further research and refinement in future virtual reality app development.

Author 1: Irna Hamzah
Author 2: Ely Salwana
Author 3: Nilufar Baghaei

Keywords: Virtual reality; app; user experience; emotion regulation; social-emotional learning

PDF

Paper 29: Enhancing Emotion Regulation Through Virtual Reality Design Framework for Social-Emotional Learning (VRSEL)

Abstract: Virtual reality (VR) has swiftly progressed, transitioning from a niche technology primarily associated with gaming to a versatile tool with broad applications across entertainment, healthcare, education, and beyond. Social-emotional learning (SEL) is increasingly recognized for its role in enhancing individuals' social skills and emotional regulation. However, despite the growing body of research on VR, the development of specific design frameworks for integrating VR with SEL remains underexplored. This study addresses this gap by employing thematic analysis to identify the critical components necessary for a Virtual Reality Design Framework for Social-Emotional Learning (VRSEL) aimed at improving emotion regulation among Malaysian adolescents. Through qualitative data derived from expert interviews in SEL and VR, this research proposes a framework that leverages immersive VR technology to create realistic, interactive scenarios that facilitate the practice and development of social-emotional skills. The framework emphasizes key design principles, including user interface (UI), presentation layer (PL), and brain activity (BA). Our findings suggest that VRSEL is a powerful tool for SEL, offering significant potential for educational environments. However, challenges such as technical barriers, content development, and educator training must be addressed to fully realize its benefits. This research highlights the promising role of VR in advancing SEL and lays the groundwork for further exploration and refinement of VRSEL in diverse educational settings.

Author 1: Irna Hamzah
Author 2: Ely Salwana
Author 3: Nilufar Baghaei
Author 4: Mark Billinghurst
Author 5: Azhar Arsad

Keywords: Virtual reality; design; framework; social-emotional learning; emotion regulation

PDF

Paper 30: Bridging the Gap: Machine Learning and Vision Neural Networks in Autonomous Vehicles for the Aging Population

Abstract: As autonomous vehicles (AVs) continue to evolve, it is necessary to address the unique needs of the aging population, a group that stands to benefit significantly from this technology. This scoping review focuses on the role of machine learning and vision neural networks in autonomous vehicles, with an emphasis on enhancing safety, usability, and trust for elderly users. It systematically reviews the existing literature to identify how these technologies address the cognitive and physical challenges faced by older adults. The review highlights key advancements in AV technology, such as adaptive interfaces and assistive features, that can enhance the driving experience for the elderly. Additionally, it investigates factors influencing trust in and acceptance of AVs among older adults, emphasizing the importance of transparent and user-friendly design. Although notable progress has been made, significant gaps remain in understanding how to optimize these technologies to meet the diverse needs of elderly passengers. The review identifies areas for future research, including personalized AV systems and regulatory frameworks that support elderly-friendly designs. By addressing these gaps, the study aims to contribute to the development of autonomous vehicles that are inclusive and accessible, improving mobility and quality of life for the aging population. This review underscores the importance of integrating machine learning and vision neural networks in designing AVs that cater to the unique needs of older adults, offering valuable insights for researchers, policymakers, and industry stakeholders advancing autonomous vehicle technology.

Author 1: Shengsheng Tan

Keywords: Autonomous vehicles; machine learning; vision neural network; human-computer interaction; aging population; artificial intelligence

PDF

Paper 31: Machine Learning for Predicting Intradialytic Hypotension: A Survey Review

Abstract: Intradialytic hypotension (IDH) is a common complication in patients undergoing maintenance hemodialysis and is associated with an increased risk of cardiovascular and all-cause mortality. Machine learning (ML) and deep learning (DL) techniques transform healthcare by enabling accurate disease diagnosis, personalised treatment plans, and clinical decision support. However, challenges like data quality, privacy, and interpretability must be addressed for responsible adoption. This survey review aims to summarise and analyse relevant articles on applying machine learning models for predicting IDH. Among these models, deep learning, a subfield of machine learning, stands out because it can improve the overall performance of health care, particularly in diagnostic imaging and pathologic processes and in the synthetic judgment of big data flow. The insights gained from this survey review will assist researchers and practitioners in selecting appropriate machine-learning models and implementing preemptive measures to prevent IDH in dialysis patients.

Author 1: Saeed Alqahtani
Author 2: Suhuai Luo
Author 3: Mashhour Alanazi
Author 4: Kamran Shaukat
Author 5: Mohammed G Alsubaie
Author 6: Mohammad Amer

Keywords: Hemodialysis; machine learning; deep learning; artificial intelligence; intradialytic hypotension; electrocardiogram; light gradient boosting machine; deep neural network; recurrent neural network

PDF

Paper 32: An Interactive Attention-Based Approach to Document-Level Relationship Extraction

Abstract: Document-level relation extraction entails sifting through extensive document data to pinpoint relationships and pertinent event details among various entities. This process aids intelligence analysts in swiftly grasping the essence of the content while revealing potential connections and emerging trends, thus proving invaluable for research purposes. This paper puts forward a method for document-level relation extraction that leverages an interaction attention mechanism. Initially, building on an evidence-based approach to document-level relation extraction, the interaction attention mechanism is introduced to extract the final layer of hidden states, which contains rich semantic information, from the document encoder. Subsequently, these hidden states are fed into a self-attention layer informed by dependency parsing. The outputs of both components serve as distinct supervisory signals for the interactive input, and pooling these outputs yields context embeddings with enhanced representational power. Preliminary relation triples are then extracted using the relation classifier. Finally, building on these preliminary relation results, relation inference is carried out independently using pseudo-documents created from the source material and pertinent evidence. Only those relations whose cumulative inference score surpasses a certain threshold are regarded as final outcomes. Experimental findings on publicly accessible datasets indicate commendable performance.

Author 1: Zhang Mei
Author 2: Zhao Zhongyuan
Author 3: Xu Zhitong

Keywords: Document-level relation extraction; interaction attention-based; the baseline model

PDF

Paper 33: Computer Modeling of the Stress-Strain State of Two Kvershlags with a Double Periodic System of Slits Weighty Elastic Transtropic Massif

Abstract: This paper presents computer modeling of the stress-strain state of two kvershlags with a doubly periodic system of slits in a weighty elastic transtropic massif. It introduces key concepts such as 'kvershlag', a term describing perpendicular cavities in a layered massif, and 'weighty elastic transtropic massif', which refers to the specialized geological structure considered in the study; these terms are critical for understanding the modeling approach. Owing to the complexity of an analytical solution for this class of problems, a numerical method is used: the mixed problem is solved by reducing it to an equivalent medium in terms of stiffness, and the finite element method is applied. A software package has been created to compute the stress-strain state of the two kvershlags, and its correctness was verified using test tasks. To study the stress-strain state of kvershlags in a weighty massif, the basic systems of equations are obtained, algorithms are constructed, and the program complex FEM_3D for solving finite element method problems is compiled. The mixed problems of the stress-strain state of the cavities are solved approximately. The results of the computer calculations are systematized and analyzed, specific conclusions are drawn, and recommendations for their practical application are proposed. The numerical solution of the given problem was obtained using this software, and the results demonstrate that the numerical approach agrees with the test tasks to within 0.01%.

Author 1: Tursinbay Turymbetov
Author 2: Gulmira Tugelbaeva
Author 3: Baqlan Kojahmet
Author 4: Bekzat Kuatbekov
Author 5: Serzhan Maulenov
Author 6: Bakhytzhan Turymbetov
Author 7: Mukhamejan Abdibek

Keywords: Transtropic; cavities; stress-strain state; deformation; finite element; slits

PDF

Paper 34: Smart System for Driver Behavior Prediction

Abstract: Driver behavior has recently emerged as a challenging topic in traffic-risk studies, and despite advances in the area, challenges remain. The present contribution deals with predicting driver behavior in real time using machine-learning techniques applied to sensing data collected from smartphone sensors (accelerometer, gyroscope, GPS) and from OBD II. To ensure real-time prediction, we used a real-time architecture based on the MongoDB Atlas service to synchronize data communication. Furthermore, we opted for a Random Forest model, which demonstrated the highest performance compared to the other models. This model has the advantage of prediction and prevention, warning a driver when his or her driving style is aggressive, moderate, or slow. The proposed system aims to provide more information about incidents in order to gain a better understanding of their causes.

Author 1: Hajar LAZAR
Author 2: Zahi JARIR

Keywords: Driver behavior prediction; OBD II; smartphone sensors; intelligent transport system; traffic safety

PDF
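As a toy illustration of the Random Forest classification described above, the sketch below labels driving style from two summary features. The synthetic samples and the assumed features (mean absolute acceleration, harsh events per minute) are illustrative stand-ins; the paper's model is trained on real smartphone and OBD II sensing data.

```python
from sklearn.ensemble import RandomForestClassifier

# Assumed features per window: [mean |acceleration| (m/s^2), harsh events/min]
X = [[0.5, 0], [0.7, 1], [1.5, 3], [1.8, 4], [3.2, 8], [3.5, 9],
     [0.6, 0], [1.6, 3], [3.0, 7]]
y = ["slow", "slow", "moderate", "moderate", "aggressive", "aggressive",
     "slow", "moderate", "aggressive"]

# Fit an ensemble of decision trees on the labeled windows
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify a new window that falls clearly in the aggressive region
style = clf.predict([[3.3, 8]])[0]
```

In the deployed system such a prediction would trigger the warning to the driver; here the point is only the feature-vector-in, style-label-out shape of the classifier.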

Paper 35: Development of Intelligent Learning Model Based on Ant Colony Optimization Algorithm

Abstract: With the gradual popularization of online courses, learners faced with a large number of course choices are increasingly dissatisfied with imprecise course recommendation mechanisms, and how to better recommend relevant courses to targeted users has become a current research hotspot. An intelligent learning model based on an ant colony optimization algorithm is introduced, which can accurately calculate the similarity between courses and learners; after structured classification, the model recommends courses to learners in the optimal way. The results showed that the precision of this method reached the order of 10^-20 when tested on the Sphere and Ellipse functions, and the optimal solution for the Ulysses21 problem was 27, better than the rank-based Ant System (ASrank), Max-Min Ant System (MMAS), and Ant System (AS). The proposed ant colony optimization algorithm had better convergence performance than the ASrank, MMAS, and AS algorithms, with a shortest path of 53.5. After the Root Mean Square Error (RMSE) and Relative Deviation (RD) distributions reached 6% and 8%, the stability of the proposed method no longer decreased with increasing RMSE. The accuracy did not vary significantly with changes in the dataset, and the reproducibility was better than that of the other comparison models. In the path-Block and path-Naive scenarios, the proposed algorithm had an average computation time of only 1011, better than the Ant Colony Optimization (ACO) and Massive Multilingual Speech (MMS) models. Therefore, the proposed algorithm improves the performance of intelligent learning models, solves the problem of local optima while enhancing the convergence efficiency of the model, and provides new solutions and directions for improving the recommendation performance of online learning platforms.

Author 1: Xiaojing Guo
Author 2: Xiaoying Zhu
Author 3: Lei Liu

Keywords: Online courses; ant colony optimization algorithm; intelligent learning model; path planning; local optimum

PDF
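The pheromone-guided construction and update cycle that ant colony optimization relies on can be shown on a tiny 4-city tour problem. The distance matrix, colony size, evaporation rate, and iteration count below are toy assumptions, not the paper's configuration or its course-recommendation encoding.

```python
import random

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
n = len(dist)
tau = [[1.0] * n for _ in range(n)]                 # pheromone matrix
rng = random.Random(0)

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

best, best_len = None, float("inf")
for _ in range(100):                                 # colony iterations
    tours = []
    for _ant in range(10):
        tour = [0]
        while len(tour) < n:
            cur = tour[-1]
            choices = [j for j in range(n) if j not in tour]
            # attractiveness = pheromone x heuristic (inverse distance)
            w = [tau[cur][j] / dist[cur][j] for j in choices]
            tour.append(rng.choices(choices, weights=w)[0])
        tours.append(tour)
    for row in tau:                                  # evaporation
        for j in range(n):
            row[j] *= 0.9
    for t in tours:                                  # pheromone deposit
        length = tour_length(t)
        if length < best_len:
            best, best_len = t, length
        for i in range(n):
            a, b = t[i], t[(i + 1) % n]
            tau[a][b] += 1.0 / length
            tau[b][a] += 1.0 / length
```

Shorter tours receive more pheromone, so later ants concentrate on good edges; evaporation is what keeps the colony from locking into an early local optimum, the failure mode the abstract targets.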

Paper 36: Automated Detection of Malevolent Domains in Cyberspace Using Natural Language Processing and Machine Learning

Abstract: Cyberattacks are intentional attacks on computer systems, networks, and devices. Malware, phishing, drive-by downloads, and injection are popular cyberattacks that can harm individuals, businesses, and organizations. Most of these attacks trick internet users through malicious links or webpages. Malicious webpages can be used to distribute malware, steal personal information, conduct phishing attacks, or perform other malicious activities. Detecting such malicious websites is a tedious task for internet users, so locating them in cyberspace requires an automated detection tool. Currently, machine learning techniques are used to detect such malicious websites. The majority of recent studies derive a limited number of features from webpages (both benign and malicious) and use machine learning (ML) algorithms to detect fraudulent webpages; however, these constrained feature sets might not exploit the full potential of the dataset. This study addresses this issue by identifying malicious websites using both URL and webpage-content features. To maximize detection accuracy, both n-grams and vectorization methods from natural language processing are adopted with a minimal feature set. To exploit the full potential of the dataset, the proposed approach derives 22 common linguistic features of the URL and generates n-grams from the domain name of the URL. The textual content of the webpages is also used. The research employs seven machine learning algorithms with three vectorization methods. The outcome reveals that the proposed method outperformed the results of previous studies.

Author 1: Saleem Raja Abdul Samad
Author 2: Pradeepa Ganesan
Author 3: Amna Salim Al-Kaabi
Author 4: Justin Rajasekaran
Author 5: Singaravelan M
Author 6: Peerbasha Shebbeer Basha

Keywords: Machine learning; N-gram; linguistic features; natural language processing (NLP); malicious webpage

PDF

Paper 37: Enhanced IVIFN-ExpTODIM-MABAC Technique for Multi-Attribute Group Decision-Making Under Interval-Valued Intuitionistic Fuzzy Sets

Abstract: The evaluation of English teaching quality is crucial for enhancing teaching effectiveness. It helps teachers understand their teaching methods and students' learning outcomes through systematic assessment, thereby guiding teachers to adjust their teaching strategies. Additionally, the results of the evaluation provide decision-making support for educational management at schools, optimizing curriculum design and resource allocation. Regular evaluations of teaching quality motivate teachers for continuous professional development, improve teaching standards, and ensure that students achieve maximum growth and progress in their English learning journey. The assessment of college English teaching quality employs multi-attribute group decision-making (MAGDM). Techniques like Exponential TODIM (ExpTODIM) and MABAC are utilized to facilitate MAGDM. During the evaluation process, interval-valued intuitionistic fuzzy sets (IVIFSs) are utilized to handle fuzzy data. This research introduces a novel method, the interval-valued intuitionistic fuzzy number ExpTODIM-MABAC (IVIFN-ExpTODIM-MABAC), tailored for MAGDM under the framework of IVIFSs. To demonstrate its efficacy, a numerical example evaluating college English teaching quality is presented. Key contributions of this study include: (1) Extending the ExpTODIM-MABAC method to include IVIFSs with an Entropy model; (2) Utilizing Entropy to ascertain weights within IVIFSs; (3) Proposing the IVIFN-ExpTODIM-MABAC approach for MAGDM under IVIFSs; (4) Validating the approach with a numerical example and various comparative analyses of college English teaching quality.
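As a crisp-number illustration of the entropy weighting step named in contribution (2) (the paper applies it within IVIFSs, which this sketch does not model), the standard entropy method assigns larger weights to attributes that discriminate more between alternatives:

```python
import math

def entropy_weights(matrix):
    """Objective attribute weights via the entropy method.
    matrix[i][j]: positive score of alternative i on attribute j."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    raw = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        s = sum(col)
        p = [x / s for x in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)  # entropy of attribute j
        raw.append(1.0 - e)                                    # divergence degree
    total = sum(raw)
    return [w / total for w in raw]

# Attribute 1 (second column) varies across alternatives, attribute 0 does not,
# so attribute 1 receives almost all the weight. Values are hypothetical.
W = entropy_weights([[0.6, 0.9],
                     [0.6, 0.1],
                     [0.6, 0.5]])
```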

Author 1: Bin Xie

Keywords: Multi-attribute group decision-making (MAGDM); interval-valued intuitionistic fuzzy sets (IVIFSs); ExpTODIM approach; MABAC approach; college English teaching quality evaluation

PDF

Paper 38: CoCoSo Framework for Management Performance Evaluation of Teaching Services in Sports Colleges and Universities with Euclidean Distance and Logarithmic Distance

Abstract: Sports colleges represent the highest level in China's higher education system for cultivating sports professionals. They shoulder the arduous task of cultivating sports talents with innovative spirit and practical ability, and contribute to the country's sports and education undertakings. Studying the service performance of teaching management departments in sports colleges helps teaching managers establish the central position of teaching work in their thoughts and actions, transform their work style, and enhance their awareness of serving teaching, teachers, and students. It also encourages them to organize services closely around teaching work and to make improving service levels and optimizing service quality an important part of raising the standard of teaching management in sports colleges. The management performance evaluation of teaching services in sports colleges and universities is treated as a multiple-attribute decision-making (MADM) problem. Recently, the CoCoSo and entropy techniques have been utilized to cope with MADM. Double-valued neutrosophic sets (DVNSs) are used to characterize fuzzy information during the management performance evaluation of teaching services in sports colleges and universities. In this study, the double-valued neutrosophic number CoCoSo (DVNN-CoCoSo) technique is developed for MADM in light of the DVNN Euclidean distance (DVNNED) and DVNN logarithmic distance (DVNNLD). Finally, a numerical example for the management performance evaluation of teaching services in sports colleges and universities is put forward to demonstrate the DVNN-CoCoSo technique.
The major contributions of this study are: (1) the DVNN-CoCoSo technique is developed for MADM in light of DVNNED and DVNNLD; (2) objective weights are derived through the entropy technique; (3) a numerical example for the management performance evaluation of teaching services in sports colleges and universities, along with comparative analyses, is presented to verify the DVNN-CoCoSo technique.

Author 1: Feng Li
Author 2: Yuefei Wen

Keywords: Multiple-Attribute Decision-Making (MADM); Double-Valued Neutrosophic Sets (DVNSs); CoCoSo technique; management performance evaluation of teaching services

PDF

Paper 39: Enhanced Methodology for Production-Education Integration and Quality Evaluation of Rural Vocational Education Under Rural Revitalization with 2-Tuple Linguistic Neutrosophic Numbers

Abstract: Rural vocational education (RVE) plays a crucial role in nurturing practical talents for the development of rural economy and society in the new era, as well as cultivating future generations of agricultural successors. Quality evaluation serves as an essential means of ensuring educational excellence. It acts as a key element for supervision, assurance, and enhancement of educational quality. For the production-education integration (PEI) in RVE, establishing a quality evaluation system that aligns with national and rural conditions, caters to the needs of modern agricultural industry development, and reflects the characteristics of RVE is crucial. Such a system plays a vital role in leading and promoting the deep integration of industry and education in rural vocational colleges. The PEI quality evaluation in RVE under rural revitalization involves multi-attribute group decision-making (MAGDM). Currently, the Exponential TODIM (ExpTODIM) approach and the grey relational analysis (GRA) approach have been utilized to address MAGDM challenges. To handle uncertain information in PEI quality evaluation in RVE under rural revitalization, 2-tuple linguistic neutrosophic sets (2TLNSs) serve as a valuable tool. This paper introduces the 2-tuple linguistic neutrosophic number Exponential TODIM-GRA (2TLNN-ExpTODIM-GRA) approach to effectively manage MAGDM problems using 2TLNSs. Additionally, a numerical study is conducted to validate the application of this approach for PEI quality evaluation in RVE under rural revitalization.

Author 1: Xingli Wang

Keywords: Multiple-attribute group decision-making (MAGDM); 2TLNSs; ExpTODIM approach; GRA approach; PEI quality evaluation

PDF

Paper 40: A Deep Learning Based Detection Method for Insulator Defects in High Voltage Transmission Lines

Abstract: The high-voltage transmission system is a key component of the power network, and the reliability of its insulators directly affects the safe operation of the system. Traditional insulator defect detection methods rely on manual inspection, which requires significant human resources and is prone to substantial subjectivity. To address this issue, this paper proposes an insulator defect recognition method based on the improved YOLOv5 algorithm. This method first collects images of insulator defects and then utilizes the YOLOv5 model for recognition training. To enhance multi-scale feature fusion capability, a bidirectional feature pyramid network (BiFPN) is introduced. During the training process, the SiLU activation function is used, and the SE attention mechanism is integrated into the detection backbone network, which enhances the model's detection accuracy. Experimental results show that the model achieves a detection precision of 90.27%, a recall of 89.14%, and a mAP of 91.34% on the test set. To further enhance the model's practicality, a PyQt5-based graphical user interface (GUI) for the inspection system is designed, enabling interactive functions such as image uploading, defect detection, and result display. In summary, the research presented in this paper provides efficient and accurate technical support for intelligent power inspection, offering a wide range of application prospects.

Author 1: Wang Tingyu
Author 2: Sun Xia
Author 3: Liu Jiaxing
Author 4: Zhang Yue

Keywords: Insulators; insulator defect detection; improved YOLOv5; BiFPN network; PyQt5

PDF

Paper 41: A Review of Personalized Recommender System for Mental Health Interventions

Abstract: Personalized recommender systems for mental health are becoming indispensable instruments for providing individuals with individualized resources and therapeutic interventions. This study aims to explore the application of recommender systems within the mental health domain through a systematic literature review. The research is guided by three primary questions: 1) What is a recommender system, and what techniques are available within these systems? 2) What techniques and approaches are used explicitly in recommender systems for mental health applications? 3) What are the limitations and challenges in applying recommender systems in the mental health domain? The first step in answering these questions is to give a thorough introduction to recommender systems, covering all the different methods, including content-based filtering, collaborative filtering, knowledge-based filtering, and hybrid approaches. Next, it examines the specific techniques and approaches employed in the mental health context, highlighting their unique requirements for adaptation, benefits, and limitations. Ultimately, the research highlights the key limitations and challenges, including data privacy concerns, the need for tailored recommendations, and the complexities of user engagement in mental health environments. By synthesizing current knowledge, this review provides valuable insights into the potential and constraints of recommender systems in supporting mental health, offering guidance for future research and development in this critical area.
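A minimal sketch of the collaborative filtering technique surveyed above; the toy ratings matrix and helper names are hypothetical, not from any reviewed system:

```python
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def recommend(ratings, user, k_items):
    """User-based collaborative filtering: score a user's unrated items by the
    ratings of similar users, weighted by cosine similarity."""
    target = ratings[user]
    sims = {other: cosine(target, r) for other, r in ratings.items() if other != user}
    scores = {}
    for item in range(len(target)):
        if target[item] == 0:  # only recommend unrated items
            num = sum(sims[o] * ratings[o][item] for o in sims)
            den = sum(abs(s) for s in sims.values()) or 1.0
            scores[item] = num / den
    return sorted(scores, key=scores.get, reverse=True)[:k_items]

# Toy matrix (rows: users, columns: mental-health resources; 0 = unrated).
ratings = {"u1": [5, 4, 0, 1], "u2": [5, 5, 4, 1], "u3": [1, 1, 0, 5]}
top = recommend(ratings, "u1", 1)
```

Content-based and hybrid approaches differ mainly in where the similarity comes from (item features versus co-rating patterns), a trade-off the review discusses.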

Author 1: Idayati Binti Mazlan
Author 2: Noraswaliza Abdullah
Author 3: Norashikin Ahmad
Author 4: Siti Zaleha Harun

Keywords: Recommender system; collaborative filtering; content-based filtering; hybrid recommender system; mental health

PDF

Paper 42: Intelligent Service Book Sorting in University Libraries Based on Linear Discriminant Analysis Method

Abstract: The demand for intelligent services in university libraries is constantly increasing, especially in intelligent book sorting. This research explores an intelligent classification method for university library books based on linear discriminant analysis, which is used to reduce the dimensionality of multidimensional feature data. A membership model for different categories of books is established to achieve classification. The results showed that when the training set data was reduced to two dimensions, the feature extraction accuracy of the classification algorithm reached 64.02%, significantly higher than the 52.48% obtained with one-dimensional data. In addition, the membership calculation accuracy of axiomatic fuzzy sets on two-dimensional data was high, reducing the classification difficulties caused by mixed samples. After comparing and analyzing different algorithms, the proposed transfer learning linear-discriminant analysis-axiomatic fuzzy set algorithm achieved the highest accuracy of 98.67% and completed data classification in about 20 s, which was superior to other commonly used classification algorithms. The practical significance of the research lies in providing an efficient and accurate book sorting algorithm, which helps to improve the work efficiency and service quality of libraries.
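A minimal sketch of the two-class Fisher linear discriminant idea underlying the method; the synthetic data and category names are assumptions, not the paper's dataset, and the fuzzy membership step is not modeled:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher LDA: project features onto w = Sw^-1 (m1 - m0),
    the direction that best separates the class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], 0.3, size=(50, 2))   # e.g. "science" books (hypothetical)
X1 = rng.normal([2, 2], 0.3, size=(50, 2))   # e.g. "literature" books (hypothetical)
w = fisher_lda_direction(X0, X1)
# Classify by thresholding the 1-D projection at the midpoint of projected means.
threshold = (X0.mean(axis=0) @ w + X1.mean(axis=0) @ w) / 2
pred1 = X1 @ w > threshold
```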

Author 1: Changjun Wang
Author 2: Fengxia You
Author 3: Yu Wang

Keywords: Linear discrimination; library intelligent services; book sorting; university libraries

PDF

Paper 43: Prediction of Booking Trends and Customer Demand in the Tourism and Hospitality Sector Using AI-Based Models

Abstract: Accurate demand forecasting is critical for optimizing operations in the tourism and hospitality sectors. This paper proposes a robust multi-algorithmic framework leveraging four advanced Artificial Intelligence models (LSTM, Random Forest, XGBoost, and Prophet) to predict booking trends and customer demand. In contrast to traditional approaches, this study incorporates external factors such as competitors' pricing strategies, local events, and weather patterns, offering a more holistic view of demand drivers. Using a comprehensive dataset from a leading hotel chain, we systematically compare the performance of these models, providing detailed evaluations. The findings offer actionable insights for hotel managers, demonstrating how predictive analytics can inform revenue management, improve operational efficiency, and enhance marketing initiatives. These results contribute to the evolving field of demand forecasting, offering practical recommendations for data-driven decision-making in the tourism and hospitality sectors.

Author 1: Siham Rekiek
Author 2: Hakim Jebari
Author 3: Kamal Reklaoui

Keywords: Artificial Intelligence; decision-making; long short-term memory; XGBoost; Random Forest; Prophet; tourism; hospitality; demand forecasting; booking trends; customer

PDF

Paper 44: Optimizing Production in Reconfigurable Manufacturing Systems with Artificial Intelligence and Petri Nets

Abstract: This article presents an advanced approach to optimize production in Reconfigurable Manufacturing Systems (RMFS) by integrating Petri Nets with artificial intelligence (AI) techniques, particularly a genetic algorithm (GA). The proposed methodology aims to enhance scheduling efficiency and adaptability in dynamic manufacturing environments. Quantitative analysis demonstrates significant improvements, with the approach achieving an 85% success rate in reducing lead times and improving resource utilization, outperforming traditional scheduling methods by a margin of 15%. Furthermore, our AI-driven system exhibits a 90% success rate in providing data-driven insights, leading to more informed decision-making processes compared to existing neural network optimization techniques. The scalability of the proposed method is evidenced by its consistent performance across various RMFS configurations, achieving an 80% success rate in optimizing scheduling decisions. This study not only validates the robustness of the proposed method through extensive benchmarking but also highlights its potential for widespread adoption in real-world manufacturing scenarios. The findings contribute to the advancement of intelligent manufacturing by offering a novel, efficient, and adaptable solution for complex scheduling challenges in RMFS.
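A hedged sketch of a genetic algorithm applied to a simple scheduling objective (single-machine total weighted completion time); the job data, operators, and GA settings are illustrative and much simpler than the paper's Petri-Net-based RMFS model:

```python
import random

def schedule_cost(order, proc, weight):
    """Total weighted completion time of jobs run in the given order."""
    t, cost = 0, 0
    for j in order:
        t += proc[j]
        cost += weight[j] * t
    return cost

def ga_schedule(proc, weight, pop_size=40, gens=80, seed=1):
    """Tiny GA over job permutations: tournament selection, a simplified
    order crossover, swap mutation, and elitism."""
    rng = random.Random(seed)
    n = len(proc)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    fit = lambda p: schedule_cost(p, proc, weight)
    for _ in range(gens):
        pop.sort(key=fit)
        nxt = pop[:2]                                  # elitism: keep best two
        while len(nxt) < pop_size:
            p1, p2 = (min(rng.sample(pop, 3), key=fit) for _ in range(2))
            a, b = sorted(rng.sample(range(n), 2))
            mid = p1[a:b]                              # segment from parent 1
            child = mid + [g for g in p2 if g not in mid]
            if rng.random() < 0.3:                     # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fit)

proc = [4, 2, 7, 3, 1, 5]      # processing times (hypothetical jobs)
weight = [1, 5, 2, 4, 3, 2]    # job priorities (hypothetical)
best = ga_schedule(proc, weight)
```

For this objective the exact optimum (cost 122 here) is known from the weighted-shortest-processing-time rule, which makes small instances a convenient sanity check for the GA.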

Author 1: Salah Hammedi
Author 2: Jalloul Elmelliani
Author 3: Lotfi Nabli
Author 4: Abdallah Namoun
Author 5: Meshari Huwaytim Alanazi
Author 6: Nasser Aljohani
Author 7: Mohamed Shili
Author 8: Sami Alshmrany

Keywords: Artificial Intelligence (AI); Genetic Algorithms (GAs); optimization; intelligent scheduling; Petri Nets; Reconfigurable Manufacturing Systems (RMFS); scheduling

PDF

Paper 45: TSO Algorithm and DBN-Based Comprehensive Evaluation System for University Physical Education

Abstract: With the rise of fitness technologies and the integration of smart applications in education, improving physical education evaluation methods is essential for better assessing student performance inside and outside the classroom. Traditional evaluation methods often lack precision, fairness, and real-time capabilities. This study aims to develop an integrated evaluation method for university physical education using a combination of the Tuna Swarm Optimization (TSO) algorithm and a Deep Belief Network (DBN) to optimize the accuracy and efficiency of evaluating both in-class and extracurricular physical activities. The evaluation system is built using the Campus Running APP, which tracks and analyzes student performance in various physical education aspects, including in-class participation, extracurricular activities, and fitness tests. The TSO algorithm is employed to optimize the DBN, improving its ability to process complex datasets and avoid local optima. The model is trained and tested on a dataset collected from student activity on the Campus Running APP. Experimental results show that the TSO-DBN model outperforms traditional methods, such as DBN, GWO-DBN, and FTTA-DBN, in terms of evaluation accuracy and processing time. The TSO-DBN model achieves a root mean square error (RMSE) of 0.2-0.3, significantly lower than the comparison models. Additionally, it reaches an R² value of 0.98, indicating high prediction accuracy, and demonstrates the fastest evaluation time of 0.0025 seconds. These results underscore the model’s superior ability to provide accurate, real-time assessments. The integration of the TSO algorithm with the DBN significantly improves the precision, efficiency, and fairness of physical education evaluations. The model offers a comprehensive and objective system for assessing student performance, helping universities better monitor and promote student health and physical activity. 
This approach paves the way for future research and application of AI-based systems in educational environments.

Author 1: Yonghua Yang

Keywords: Campus Running APP; integrated evaluation of university sports inside and outside the classroom; tuna swarm optimization algorithm; deep belief network

PDF

Paper 46: Robust Image Tampering Detection and Ownership Authentication Using Zero-Watermarking and Siamese Neural Networks

Abstract: The development of advanced image editing tools has significantly increased the manipulation of digital images, creating a pressing need for robust tamper detection and ownership authentication systems. This paper presents a method that combines zero-watermarking with Siamese neural networks to detect image tampering and verify ownership. The approach utilizes features from the Discrete Wavelet Transform (DWT) and employs two halftone images as watermarks: one representing the owner's portrait and the other corresponding to the protected image. A feature matrix is generated from the owner's portrait using the Siamese network and securely linked to the image's halftone watermark through an XOR operation. Additionally, data augmentation enhances the model's robustness, ensuring effective learning of image features even under geometric and signal processing distortions. Experimental results demonstrate high accuracy in recovering halftone images, enabling precise tamper detection and ownership verification across different datasets and image distortions (geometric and image processing distortions).
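The XOR linking step described above can be illustrated as follows; the random bit matrices stand in for the binarized DWT/Siamese features and the halftone watermark, which this sketch does not compute:

```python
import numpy as np

def build_master_share(features, watermark):
    """Zero-watermarking: XOR a binary feature matrix with the binary
    watermark; the protected image itself is never modified."""
    return np.bitwise_xor(features, watermark)

def recover_watermark(features, master_share):
    """XOR is self-inverse, so the same features recover the watermark."""
    return np.bitwise_xor(features, master_share)

rng = np.random.default_rng(42)
features = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)   # stand-in for binarized features
watermark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)  # stand-in for halftone owner mark
share = build_master_share(features, watermark)
recovered = recover_watermark(features, share)   # equals watermark bit-for-bit
```

If tampering changes the extracted features, the corresponding recovered-watermark bits flip, which is what localizes the manipulation.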

Author 1: Rodrigo Eduardo Arevalo-Ancona
Author 2: Manuel Cedillo-Hernandez
Author 3: Francisco Javier Garcia-Ugalde

Keywords: Zero-watermarking; tampering detection; ownership authentication; neural network

PDF

Paper 47: Eating Behavior and Level of Knowledge About Healthy Eating Among Gym Users: A Multinomial Logistic Regression Study

Abstract: The World Health Organization indicates that unhealthy diets cause approximately 11 million deaths annually worldwide. In Peru, 57.9% of the population consumes highly processed foods daily. The objective of this study is to analyze the relationship between knowledge about healthy eating and eating behavior among gym users in a district of Lima, Peru. Using an exploratory and quantitative design, information was collected from 156 users through a hybrid questionnaire, analyzed with SPSS and multinomial logistic regression techniques. The results reveal that 57.42% of the participants have an intermediate knowledge of healthy eating, while only 17.42% reach a high level. Likewise, 49.03% exhibit an intermediate eating behavior. In addition, sociodemographic factors, such as the duration of gym attendance and maintenance of a specific diet, were found to influence eating behavior. It is concluded that there is a significant relationship between the level of knowledge and eating behavior, underlining the importance of nutrition education to improve eating habits in this population.

Author 1: Ana Huamani-Huaracca
Author 2: Sebastián Ramos-Cosi
Author 3: Michael Cieza-Terrones
Author 4: Gina León-Untiveros
Author 5: Alicia Alva Mantari

Keywords: Knowledge; healthy eating; gym; eating behavior

PDF

Paper 48: Development of a Causal Model of Post-Millennials' Willingness to Disclose Information to Online Fashion Businesses (Thailand)

Abstract: This research examines the causal factors influencing the willingness of Central Post-Millennials to disclose information to online fashion businesses by using privacy calculus theory as the basic principle for modeling. The study has three primary objectives: (1) to investigate the causal factors influencing willingness to disclose information, (2) to analyze both the direct and indirect effects of perceived risk, perceived benefit, perceived value, perceived control over the use of personalization data, and trust on the willingness to disclose information, and (3) to develop a causal factor model for understanding the determinants of willingness to disclose information among Central Post-Millennials in the context of online fashion businesses. The research sample consists of 385 individuals, and data were collected using a structured questionnaire. Descriptive and inferential statistical methods were employed for data analysis. The relationships between variables were assessed using Pearson's Correlation Coefficient. The model's fit to the empirical data was evaluated using goodness-of-fit measures, and the transmission of influence was tested through structural equation modeling (SEM). The findings reveal that demographic factors do not significantly affect the willingness to disclose information. However, the study identifies perceived risk, perceived benefit, perceived value, perceived control over the use of personalization data, and trust as key determinants of willingness to disclose information to online fashion businesses. Among these, perceived control exhibits the strongest influence, closely followed by trust. These results highlight the antecedent processes influencing the willingness to disclose information, as represented by a model developed from a comprehensive literature review and empirically tested for consistency with the data.

Author 1: Apiwat Krommuang
Author 2: Jinnawat Kasisuwan

Keywords: Online fashion business; Post-Millennials; privacy calculus; willingness; disclose information

PDF

Paper 49: Facial Expression Classification System Using Stacked CNN

Abstract: Automatic emotion recognition technology through facial expressions has broad potential, ranging from human-computer interaction to stress detection and blood pressure assessment. Facial expressions exhibit patterns and characteristics that can be identified and analyzed by image processing and machine learning methods. These methods provide a basis for the development of emotion recognition systems. This research develops a facial emotion recognition model using Convolutional Neural Network (CNN) architecture, a popular architecture in image classification, segmentation, and object detection. CNNs offer automatic feature extraction and complex pattern recognition advantages on image data. This research uses three types of datasets, FER2013, CK+, and IMED, to optimize the deep learning approach. The developed model achieved an overall accuracy of 71% on the three datasets combined, with an average precision, recall, and F1-Score of 71%. The results show that CNN architecture performed well in facial emotion classification, supporting potential practical applications in various fields.

Author 1: Aditya Wikan Mahastama
Author 2: Edwin Mahendra
Author 3: Antonius Rachmat Chrismanto
Author 4: Maria Nila Anggia Rini
Author 5: Andhika Galuh Prabawati

Keywords: FER; CNN; deep learning; image classification

PDF

Paper 50: Optimization of 3D Coverage Layout for Multi-UAV Collaborative Lighting in Emergency Rescue Operations

Abstract: In emergency rescue scenarios, Unmanned Aerial Vehicles (UAVs) play a pivotal role in navigating complex terrains and high-risk environments. This paper proposes an optimization model for the three-dimensional coverage layout of a multi-UAV collaborative lighting system, specifically designed to meet the spatial requirements of emergency operations. An enhanced Particle Swarm Optimization (PSO) algorithm is employed to tackle the layout challenges, featuring adaptive inertia weights and asymmetric learning factors to improve both efficiency and global search capabilities. The simulation results demonstrate that the proposed method significantly enhances coverage efficiency, achieving over 90% coverage in critical areas while ensuring precise UAV positioning. Additionally, the algorithm shows faster convergence and stronger global search ability, effectively optimizing UAV deployment and improving operational efficiency during rescue missions. This study offers a practical and reliable layout solution for multi-UAV collaborative lighting systems, which is crucial for reducing rescue times, ensuring operational safety, and improving resource allocation in emergency responses.
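A minimal sketch of particle swarm optimization with a linearly decreasing (adaptive) inertia weight, shown on the sphere function rather than the paper's 3D coverage objective; the coefficient values are common defaults, not the authors' tuned settings:

```python
import random

def pso(f, dim, n_particles=30, iters=100, seed=0):
    """PSO with inertia decreasing from 0.9 to 0.4: large early for
    exploration, small late for exploitation."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=f)[:]                # global best
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters               # adaptive inertia weight
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest, f(gbest)

sphere = lambda x: sum(v * v for v in x)
best, val = pso(sphere, dim=3)
```

In the paper's setting each particle would encode candidate UAV positions and f would score lighting coverage; asymmetric learning factors would replace the two fixed 2.0 coefficients.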

Author 1: Dan Jiang
Author 2: Rui Yan

Keywords: Unmanned Aerial Vehicles; emergency rescue; collaborative lighting; three-dimensional coverage; particle swarm algorithm

PDF

Paper 51: Analyzing the Impact of Occupancy Patterns on Indoor Air Quality in University Classrooms Using a Real-Time Monitoring System

Abstract: Indoor air quality (IAQ) in universities is of concern because it directly affects students' health and performance. This study presents an IoT-based system for real-time monitoring of IAQ in university classrooms. The system uses MQ-7 and MQ-135 sensors to monitor the CO and CO2 pollution parameters. The data is then processed by an ESP32 microcontroller, displayed on an LCD screen, and reported immediately in a mobile application. The system's real-time monitoring capabilities, data display, and alert mechanism provide valuable insights for improving the classroom environment. The sensors used in the system achieved an accuracy of 97.17% in the five-person scenario and 93.96% in the ten-person scenario. This study investigates how human behavior, classroom activities, and occupancy impact IAQ. The results show a strong positive correlation between occupancy rates and CO2 levels, indicating the importance of ventilation in densely populated classrooms. The correlation coefficient between the number of students and the CO2 levels is 0.982, remarkably close to 1: as the number of students in the classroom increases, the CO2 levels increase almost proportionally. This IoT-based system facilitates a data-driven approach to improving indoor environmental conditions, supporting healthier and more effective learning environments in educational institutions.
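The correlation analysis above can be reproduced in form (with hypothetical readings, not the study's data) using the Pearson coefficient:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical session readings: occupancy vs. CO2 (ppm) in one classroom.
students = [0, 5, 10, 15, 20, 25]
co2_ppm = [420, 600, 810, 980, 1190, 1370]
r = pearson_r(students, co2_ppm)   # close to 1 for near-linear data like this
```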

Author 1: Sri Ratna Sulistiyanti
Author 2: Muhamad Komarudin
Author 3: F. X Arinto Setyawan
Author 4: Hery Dian Septama
Author 5: Titin Yulianti
Author 6: M. Farid Ammar

Keywords: Indoor air quality; monitoring; pollution; IoT

PDF

Paper 52: Active Semi-Supervised Clustering Algorithm for Multi-Density Datasets

Abstract: Semi-supervised clustering with pairwise constraints has been a hot topic among researchers and experts. However, the problem becomes quite difficult to manage using random constraints for clustering data when the clusters have different shapes, densities, and sizes. This research proposes an active semi-supervised density-based clustering algorithm, termed "ASS-DBSCAN," designed specifically for clustering multi-density data. By integrating active learning and semi-supervised techniques, ASS-DBSCAN enhances traditional clustering methods, allowing it to handle complex data distributions with varying densities more effectively. This research provides two major contributions. The first is an analysis of how pairwise constraints (must-link and cannot-link) are selected and utilized by the clustering algorithm. The second is the ability to handle multiple density levels in the dataset. We perform experiments over real datasets. The ASS-DBSCAN algorithm was evaluated against existing state-of-the-art systems on various evaluation metrics, on which it performed remarkably well.
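The must-link/cannot-link constraints discussed above can be checked with a small helper (illustrative only, not the ASS-DBSCAN implementation); a constraint-aware clusterer would penalize or forbid assignments that raise this count:

```python
def violations(labels, must_link, cannot_link):
    """Count pairwise-constraint violations for a candidate clustering.
    must_link: pairs that should share a cluster; cannot_link: pairs that must not."""
    v = 0
    for a, b in must_link:
        if labels[a] != labels[b]:
            v += 1
    for a, b in cannot_link:
        if labels[a] == labels[b]:
            v += 1
    return v

# Hypothetical 4-point clustering with two clusters.
labels = {0: "c1", 1: "c1", 2: "c2", 3: "c2"}
ok = violations(labels, must_link=[(0, 1)], cannot_link=[(1, 2)])      # 0 violations
bad = violations(labels, must_link=[(0, 2)], cannot_link=[(2, 3)])     # 2 violations
```

In the active-learning loop, the algorithm would query the constraints (pairs) whose answers are most informative rather than sampling them at random.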

Author 1: Walid Atwa
Author 2: Abdulwahab Ali Almazroi
Author 3: Eman A. Aldhahr
Author 4: Nourah Fahad Janbi

Keywords: Semi-supervised clustering; pairwise constraints; multi-density data; active learning

PDF

Paper 53: MSMA: Merged Slime Mould Algorithm for Solving Engineering Design Problems

Abstract: The Slime Mould Algorithm (SMA) has effectively solved various real-world problems such as image segmentation, solar photovoltaic cell parameter estimation, and economic emission dispatch. However, SMA and its variants still face limitations when dealing with low-dimensional optimization problems, including slow convergence and local optima traps. This study aims to develop an optimized algorithm, the Merged Slime Mould Algorithm (MSMA), to overcome these limitations and improve performance in low-dimensional optimization tasks. Additionally, MSMA introduces a novel approach by merging the Adaptive Opposition Slime Mould Algorithm (AOSMA) and the Smart Switching Slime Mould Algorithm (S2SMA), simplifying the hybridization process and enhancing optimization performance. MSMA eliminates the need for multiple initializations, avoids memory-switching requirements, and employs adaptive and smart switching rules to harness the strengths of both algorithms. The performance of MSMA is evaluated using the CEC 2005 benchmark and ten real-world applications. The Wilcoxon rank-sum test verifies the effectiveness of the proposed approach, with results compared to various SMA variations and related optimization methods. Numerical findings demonstrate superior fitness values achieved by the proposed strategy, while statistical results indicate MSMA's outperformance with a rapid convergence curve.

Author 1: Khaled Mohammad Alhashash
Author 2: Hussein Samma
Author 3: Shahrel Azmin Suandi

Keywords: Slime mould algorithm; engineering design problems; metaheuristic; optimization

PDF

Paper 54: Research on Credit Card Fraud Prediction Model Based on GAN-DNN Imbalance Classification Algorithm

Abstract: Credit card consumption has become an important mode of consumption in modern life, but credit card fraud has also emerged, disrupting the financial order and restricting the development of the industry. To address the class imbalance problem in credit card fraud detection and improve detection accuracy, this paper uses a Generative Adversarial Network (GAN) to generate fraud samples and balance the numbers of fraudulent and normal transaction samples. A deep neural network (DNN) is then used to construct a credit card fraud prediction model. The study compares this model with commonly used classification algorithms and sampling methods in detail and confirms that the designed credit card fraud prediction model performs well, providing a theoretical basis and practical reference for financial institutions to predict credit card fraud.

Author 1: Qin Wang
Author 2: Mary Jane C. Samonte

Keywords: Generative adversarial network; deep neural network; unbalanced data; credit card fraud; classification algorithms

PDF

Paper 55: Impact Analysis of Informatization Means Driven by Artificial Intelligence Technology on Visual Communication

Abstract: With the popularization of computer technology, the combination of artificial intelligence and image processing technology has become a research hotspot in visual communication. Image processing mostly involves the segmentation and detection of images. Image segmentation often focuses on extracting image contour information while ignoring the color of the image, and image detection tends to require long calculation times with relatively cumbersome calculation steps. In response to these issues, a density peak clustering algorithm is proposed for image segmentation. In the image detection phase, a region proposal network is introduced to improve the faster region-based convolutional neural network algorithm. The findings demonstrate that under 15% Gaussian noise and 10% salt-and-pepper noise, the segmentation accuracy of the density peak clustering algorithm is 98.13% and 97.89%, respectively. The accuracy, recall, and F-measure of the improved faster region-based convolutional neural network algorithm are 98.49%, 97.29%, and 97.77%, respectively, and its accuracy and average time consumption in a graphics processor environment are 98.18% and 2.94 ms. In conclusion, both the density-peak-clustering-based image segmentation algorithm and the improved faster region-based convolutional neural network algorithm are robust, with good segmentation and detection performance.
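The density peak idea behind the segmentation step assigns each point a local density rho and a distance delta to the nearest denser point; cluster centres are the points where both are large. A minimal 2-D sketch with a cutoff kernel follows (the paper's exact kernel and parameters are not given here):

```python
def density_peaks(points, dc):
    """Density Peaks Clustering quantities for 2-D points with a cutoff kernel:
    rho[i]   = number of neighbours within distance dc,
    delta[i] = distance to the nearest point of higher density
               (or to the farthest point, for the globally densest points)."""
    n = len(points)
    dist = [[((points[i][0] - points[j][0]) ** 2 +
              (points[i][1] - points[j][1]) ** 2) ** 0.5
             for j in range(n)] for i in range(n)]
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < dc)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(dist[i]))
    return rho, delta
```

For image segmentation, the "points" would be pixel feature vectors (e.g. intensity and colour), and each non-centre pixel is assigned to the cluster of its nearest denser neighbour.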

Author 1: Lei Ni

Keywords: Image segmentation; image detection; density peak clustering algorithm; convolutional neural network; faster region convolutional neural network

PDF

Paper 56: Revolutionizing Rice Leaf Disease Detection: Next-Generation SMOREF-SVM Integrating Spider Monkey Optimization and Advanced Machine Learning Techniques

Abstract: Leaf diseases pose a significant challenge to rice productivity, which is critical as rice is a staple food for over half of the world's population and a major agricultural commodity. These diseases can lead to severe economic losses and jeopardize food security, particularly in regions heavily reliant on rice farming. Traditional detection methods, such as visual inspection and microscopy, are often inadequate for early disease identification, which is crucial for effective management and minimizing yield loss. This paper introduces SMOREF-SVM, a novel approach that combines Spider Monkey Optimization (SMO) with Random Forest (RF) and Support Vector Machine (SVM) to improve the classification of rice leaf diseases. The innovation of SMOREF-SVM lies in its use of SMO for effective feature optimization, which selects the most relevant features from complex disease patterns, and its dual-classification framework using RF and SVM. Results demonstrate that SMOREF-SVM achieves an average accuracy of 98%, significantly outperforming standard SVM methods, which achieve around 90%. SMOREF-SVM also improves key metrics, including Precision, Recall, and F1 Score, by 5-10% for diseases with fewer samples, reaching Precision of 94%, Recall of 92%, and F1 Score of 93%. Additionally, ROC curve analysis shows an enhanced Area Under the Curve (AUC), approaching 0.98 for more disease classes, compared to 0.85 with traditional methods. This makes SMOREF-SVM a valuable tool for early and accurate disease detection, offering the potential to improve crop productivity and sustainability, addressing the critical challenges of disease management in agriculture.

Author 1: Avip Kurniawan
Author 2: Tri Retnaningsih Soeprobowati
Author 3: Budi Warsito

Keywords: SMOREF-SVM; rice leaf disease; classification; Spider Monkey Optimization (SMO); machine learning; image processing

PDF

Paper 57: Human Dorsal Hand Vein Segmentation Method Based on GR-UNet Model

Abstract: To address the problem of insufficient segmentation accuracy for human dorsal hand veins (HDHV), we propose a segmentation method based on the global residual U-Net (GR-UNet) model. Initially, a visual acquisition device for dorsal hand vein imaging was designed utilizing near-infrared technology, resulting in the creation of a dataset comprising 864 images of HDHV. Subsequently, a Bottleneck block from the 50-layer deep residual network (ResNet50) was integrated into the U-Net model to increase its depth and alleviate the problem of vanishing gradients. Furthermore, a global attention mechanism (GAM) was introduced at the junction to improve the acquisition of global feature information. Additionally, a weighted loss function that combines cross-entropy loss and Dice loss was employed to address the imbalance between positive and negative samples. The experimental results indicate that the GR-UNet model achieved 78.82%, 88.03%, 93.92%, and 97.5% in terms of intersection over union, mean intersection over union, mean pixel accuracy, and overall accuracy, respectively.
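The weighted loss combining cross-entropy and Dice terms can be written directly. A minimal sketch for flattened binary probability maps, where the equal weights are an illustrative assumption (the paper's weighting is not stated here):

```python
import math

def combined_loss(pred, target, w_ce=0.5, w_dice=0.5, eps=1e-7):
    """Weighted sum of binary cross-entropy and Dice loss over flattened
    probability maps; w_ce and w_dice are illustrative weights."""
    n = len(pred)
    ce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
              for p, t in zip(pred, target)) / n
    inter = sum(p * t for p, t in zip(pred, target))
    dice = 1 - (2 * inter + eps) / (sum(pred) + sum(target) + eps)
    return w_ce * ce + w_dice * dice
```

The Dice term directly counters class imbalance: it depends only on the overlap with the (small) vein foreground, so a model cannot score well by predicting background everywhere.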

Author 1: Zhike Zhao
Author 2: Wen Zeng
Author 3: Kunkun Wu
Author 4: Xiaocan Cui

Keywords: Human dorsal hand veins; GR-UNet; near infrared technology; deep residual network-50; global attention mechanism; loss function

PDF

Paper 58: A Proposed Batik Automatic Classification System Based on Ensemble Deep Learning and GLCM Feature Extraction Method

Abstract: Classification of batik images is a challenge in the field of digital image processing, considering the complexity of patterns, colors, and textures of various batik motifs. This study proposes an ensemble method that combines texture feature extraction using the Gray Level Co-occurrence Matrix (GLCM) with the Residual Neural Network (ResNet) classification model to improve accuracy in batik image classification. Texture features such as contrast, dissimilarity, entropy, homogeneity, mean, and standard deviation are extracted using GLCM and combined with ResNet to produce a more robust classification model. The experimental results show that the proposed method achieves high performance, above 90% for each evaluation metric used: accuracy, precision, recall, and F1-score. The best performance in classifying batik images is obtained with the standard deviation feature, with accuracy, precision, recall, and F1-score of 95%, 93%, 93%, and 93%, respectively. Furthermore, the application of the ensemble method based on the hard voting approach has proven effective in increasing the accuracy of batik image classification by utilizing a combination of texture features and deep learning models. The proposed method makes a significant contribution to efforts to preserve batik culture through digitalization and can be implemented for various purposes, such as an image-based batik search system.
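The GLCM features driving the classifier are straightforward to compute. A minimal sketch for a single pixel offset on an integer-valued grey image, showing two of the features named above (illustrative, not the authors' exact configuration):

```python
def glcm_features(img, levels, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for one pixel offset,
    plus two of the texture features used for classification."""
    h, w = len(img), len(img[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[img[y][x]][img[y2][x2]] += 1   # co-occurring grey pair
                total += 1
    p = [[c / total for c in row] for row in counts]
    contrast = sum(p[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    homogeneity = sum(p[i][j] / (1 + abs(i - j))
                      for i in range(levels) for j in range(levels))
    return p, contrast, homogeneity
```

In practice several offsets and angles are aggregated, and the resulting feature vector is fed to the ResNet-based ensemble.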

Author 1: Luluk Elvitaria
Author 2: Ezak Fadzrin Ahmad Shaubari
Author 3: Noor Azah Samsudin
Author 4: Shamsul Kamal Ahmad Khalid
Author 5: Salamun
Author 6: Zul Indra

Keywords: Batik; GLCM; ResNet; ensemble method; hard voting

PDF

Paper 59: A Comprehensive Crucial Review of Re-Purposing DNN-Based Systems: Significance, Challenges, and Future Directions

Abstract: The fourth industrial revolution is marked by the significance of artificial intelligence (AI), particularly the remarkable progress in deep neural networks (DNNs). These networks have become crucial in various areas of daily life because of their remarkable pattern-learning capabilities on massive datasets. However, the incompatibility of these systems makes reutilizing them for efficient data analysis and computation highly intricate and challenging due to their fragmentation, internal structure, and complexity. Training DNNs, a vital activity in model development, is often time-consuming and computationally costly. More precisely, reusing an entire model during deployment when only a small portion of its features is required results in excessive overhead. On the other hand, reengineering a model without efficient code review can also pose security risks, as the new system would inherit the model's defects and weaknesses. This paper comprehensively reviews DNN-based systems, encompassing cutting-edge frameworks, algorithms, and models for complex data, along with their existing limitations. The study, which results from a thorough examination, analysis, and synthesis of observations from 193 recent scholarly papers, provides a wealth of knowledge on the subject, identifying key issues and future research directions and offering novel guidelines to advance DNN model re-purposing and adaptation, especially in finance, healthcare, and autonomous applications. The demonstrated findings, specifically those related to the failure and risk challenges of DNN converters, including factors (n=12), symptoms (n1=4, n2=3), and root causes (n1=4, n2=3), will enrich the ML-DNN community and guide it toward desirable improvements in model development and deployment, with significant practical implications for intelligent industries.

Author 1: Yaser M Al-Hamzi
Author 2: Shamsul Bin Sahibuddin

Keywords: DNNs; DNN-based systems; significance and challenges; incompatibility; re-purposing; review

PDF

Paper 60: A Machine Learning Operations (MLOps) Monitoring Model Using BI-LSTM and SARSA Algorithms

Abstract: Machine learning operations (MLOps) achieves faster model development, delivers higher-quality machine learning models, and shortens deployment cycles. Unfortunately, MLOps is still an uncertain concept with ambiguous research implications. Professionals and academics have focused mainly on creating machine learning models rather than operating sophisticated machine learning systems in practical situations. Furthermore, a monitoring system must have a comprehensive view of the system's interactions, and the need for a strong, efficient monitoring system increases when multi-container services are used. Therefore, this research proposes a new model, the Multi Containers Monitoring (MCM) Model, based on multi-container services and two machine learning approaches: bidirectional long short-term memory (BI-LSTM) and state-action-reward-state-action (SARSA). The proposed MCM model enables MLOps systems to be scaled and monitored efficiently, and it realizes and interprets the interactions between containers. It enhances the performance of software releases and increases the number of software deployments across different types of environments. Moreover, this research proposes four routines, one for each layer of the proposed MCM model, that illustrate how each layer is to be developed. The research also shows that, by using MLOps, the proposed MCM model improves the software deployment cycle by up to 24.55% and the build duration cycle by up to 13%.
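The SARSA rule at the heart of such a monitoring model is a one-line temporal-difference update. A minimal tabular sketch, where the states and actions are hypothetical container-monitoring examples, not the paper's:

```python
def sarsa_update(Q, s, a, r, s2, a2, alpha=0.1, gamma=0.9):
    """One on-policy TD step: Q(s,a) += alpha * (r + gamma*Q(s',a') - Q(s,a))."""
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (r + gamma * Q.get((s2, a2), 0.0) - q)

# Hypothetical monitoring states/actions, for illustration only:
# scaling out under high load earned a reward of 1.0.
Q = {}
sarsa_update(Q, "high_load", "scale_out", 1.0, "normal_load", "hold")
```

Unlike Q-learning, SARSA bootstraps from the action the policy actually takes next, which suits online monitoring where exploration has real operational cost.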

Author 1: Zeinab Shoieb Elgamal
Author 2: Laila Elfangary
Author 3: Hanan Fahmy

Keywords: Machine learning; MLOps; monitoring; container; model

PDF

Paper 61: Deep Learning Approach in Complex Sentiment Analysis: A Case Study on Social Problems

Abstract: This scholarly investigation examines the utilization of artificial intelligence (AI) technology in the analysis and resolution of intricate societal challenges in many countries. The originality of this study resides in the employment of deep learning algorithms, particularly Convolutional Neural Network (CNN), to execute sentiment analysis with an elevated degree of complexity. The examination encompasses three principal dimensions of sentiment: Sentiment, Tone, and Object, with the intention of offering profound insights into public perceptions regarding various social challenges. The fundamental sentiment is categorized into three classifications: Positive, Neutral, and Negative. Moreover, the Tone analysis introduces an additional layer of comprehension that encompasses Support, Suggestion, Criticism, Complaint, and Others, thereby delineating a more precise communicative context. The Object dimension is employed to ascertain the target of the sentiment, whether it pertains to an Individual, Organization, Policy, or other entity. This inquiry applied the analysis to several clusters of social issues, including Poverty and Economic Disparity, Health and Wellbeing, Education and Literacy, Violence and Security, as well as Environment and Social Life. The findings are anticipated to aid the government in devising policies that are more effective and responsive to the exigencies of society, through an enhanced understanding of public sentiment.

Author 1: Bambang Nurdewanto
Author 2: Kukuh Yudhistiro
Author 3: Dani Yuniawan
Author 4: Himawan Pramaditya
Author 5: Mochammad Daffa Putra Karyudi
Author 6: Yulia Natasya Farah Diba Arifin
Author 7: Puput Dani Prasetyo Adi

Keywords: Sentiment analysis; deep learning; artificial intelligence; social case

PDF

Paper 62: Implementing a Machine Learning-Based Library Information Management System: A CATALYST-Based Framework Integration

Abstract: This research proposes using machine learning as a foundational element for enhancing information retrieval procedures in university libraries. This initiative will enhance students' comprehension of the topic and improve the integration of instructional resources. The author utilizes two separate machine learning methodologies and compares their performance to determine which is the most effective. The efficacy of inventory management in university libraries is enhanced by the use of forecasting algorithms. The implementation of these two algorithms was conducted within the framework of the CATALYST technology platform. This strategy enhances the efficacy of information retrieval for diverse book needs.

Author 1: Chunmei Ma

Keywords: Library management system; book availability prediction; machine learning algorithms; university libraries; information retrieval

PDF

Paper 63: The Impact of the GQM Framework on Software Engineering Exam Outcomes

Abstract: Assessment is crucial in educational systems, particularly in Software Engineering (SE) programs, where fair and effective evaluations drive continuous improvement. The shift to student-centric methodologies has evolved assessment strategies to focus on aligning educational processes with students' developmental needs rather than merely measuring academic outputs. This paper adapts the Goal-Question-Metric (GQM) framework to enhance learning in software engineering education by linking educational goals, learning activities, and assessment methods. This approach specifies expected learning outcomes and integrates mechanisms for continuous improvement, aligning teaching strategies with student performance metrics. A systematic framework for course assessment using the GQM framework is presented, aligning assessment methods with Intended Learning Outcomes (ILOs) and Student Learning Outcomes (SLOs) to ensure data-driven enhancements. To validate this approach, a template was introduced to assess the impact of a tailored GQM approach on the final exam outcomes of a software engineering course at King Abdulaziz University’s Department of Computer Science. A controlled experiment was conducted over two semesters with students from the CPCS 351 course. The control group, in the first semester, completed their finals without applying GQM, while the experimental group in the following semester employed a customized GQM framework. Statistical analyses, including ANOVA and Mann-Whitney U tests, were utilized to compare exam performance between the groups. Results indicated a significant improvement in the exam scores of the experimental group, thereby validating the effectiveness of the GQM framework in boosting academic performance through structured exam preparation and execution.

Author 1: Reem Abdulaziz Alnanih

Keywords: Goal-question-metric (GQM); software engineering; education; learning process; learning outcomes; continuous improvement; statistical analysis

PDF

Paper 64: Critical Success Factors of Microservices Architecture Implementation in the Information System Project

Abstract: Microservice Architecture (MSA) promises enhancements in information systems, including improved performance, scalability, availability, and maintenance. However, challenges during the design, development, and operations phases can hinder successful deployment. This research presents a case study of one of the leading telecommunications companies in Indonesia, which encountered a three-month delay in implementing its microservices architecture (MSA). The study aims to provide actionable insights for the company to enhance its MSA deployment and contribute to academic knowledge by offering a structured approach to evaluating critical success factors (CSFs) in similar contexts. Through a literature review, twenty-one factors were identified and categorized into four groups: (1) Organization, (2) Process, (3) Systems & Tools, and (4) Knowledge, Skills & Behavior. The Analytical Hierarchy Process (AHP) was used to evaluate the priority of each factor based on survey data from project executors and software development practitioners. The findings indicate that the Organization category is the most crucial, with (1) Top Management Support, (2) Clear Vision, and (3) Adequate Resources being the top three CSFs for MSA implementation.
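The AHP priority computation used to rank the critical success factors can be sketched with the geometric-mean method; the pairwise judgments below are hypothetical, not the survey's data:

```python
import math

def ahp_priorities(M):
    """Priority vector of a pairwise-comparison matrix (geometric-mean method).
    M[i][j] is how much more important factor i is judged than factor j."""
    n = len(M)
    gm = [math.prod(row) ** (1 / n) for row in M]   # row geometric means
    total = sum(gm)
    return [g / total for g in gm]                  # normalise to sum to 1
```

A full AHP study would also compute a consistency ratio from the matrix's principal eigenvalue before trusting the weights.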

Author 1: Mochamad Gani Amri
Author 2: Teguh Raharjo
Author 3: Anita Nur Fitriani
Author 4: Nurman Rasyid Panusunan Hutasuhut

Keywords: Microservice architecture; software architecture; critical success factors; analytical hierarchy process

PDF

Paper 65: Application Citespace Visualization Tool to Online Public Opinion Group Label: Generation, Dissemination and Trends

Abstract: With the popularization of mobile Internet technology, the social and cultural environment provides favorable conditions for online news dissemination, making group-labeling communication events highly ubiquitous. This study explores the generation, dissemination, and evolution trends of group labeling in the online public opinion environment. We crawled 20,975 initial records from the Web of Science core database and, after several rounds of screening, obtained 9,834 valid articles. Using CiteSpace 6.3, we performed metric and word-frequency analyses on these data, successively applying literature co-citation, author co-citation, journal co-citation, keyword co-citation, and clustering analyses, and decomposed group labeling into its generation environment, dissemination process, and trend evolution. Drawing on the disciplinary perspective of journalism and communication, assisted by social media platforms, the study summarizes and reveals the interaction of group labels between cyberspace and the natural world and seeks to grasp the communication mechanism underlying this emerging discursive power.

Author 1: Jingyi Ju

Keywords: Group labeling; online public opinion; labeled communication; communication mechanisms; visualization

PDF

Paper 66: Individual Cow Identification Using Non-Fixed Point-of-View Images and Deep Learning

Abstract: Monitoring and traceability are crucial for ensuring efficient and financially beneficial cattle breeding in contemporary animal husbandry. While most farmers rely mainly on ear tags, the development of computer vision and machine learning methods has opened many new noninvasive opportunities for the identification, localization, and behavior recognition of cows. In this paper, a series of experimental analyses is presented, aimed at investigating the possibility of identifying cows using non-fixed point-of-view images and deep learning. Fourteen animals were chosen, and a photo session was conducted for each one, providing training/validation images with different viewing angles of the animals. Next, a Darknet-53-based convolutional neural network (CNN) was trained using YOLOv3, capable of identifying the investigated animals. The optimal model achieved 92.2% accuracy when photos of single or grouped non-overlapping animals were used. On the other hand, the trained CNN showed poor performance on group images containing overlapping cows. The obtained results showed that cows can be reliably recognized using non-fixed point-of-view images, which is the main novelty of this study; however, certain limitations exist in the usage scenarios.

Author 1: Yordan Kalmukov
Author 2: Boris Evstatiev
Author 3: Seher Kadirova

Keywords: Cow identification; convolutional neural network; YOLOv3; non-fixed point-of-view

PDF

Paper 67: A Modified Lightweight DeepSORT Variant for Vehicle Tracking

Abstract: Object tracking plays a pivotal role in Intelligent Transportation Systems (ITS), enabling applications such as traffic monitoring, congestion management, and enhancing road safety in urban environments. However, existing object tracking algorithms like DeepSORT are computationally intensive, which hinders their deployment on resource-constrained edge devices essential for distributed ITS solutions. Urban mobility challenges necessitate efficient and accurate vehicle tracking to ensure smooth traffic flow and reduce accidents. In this paper, we present a modified lightweight variant of the DeepSORT algorithm tailored for vehicle tracking in traffic surveillance systems. By leveraging multi-dimensional features extracted directly from YOLOv5 detections, our approach eliminates the need for an additional convolutional neural network (CNN) descriptor and reduces computational overhead. Experiments on real-world traffic surveillance data demonstrate that our method reduces tracking time to 25.29% of that required by DeepSORT, with only a minimal increase over the simpler SORT algorithm. Additionally, it maintains low error rates between 0.43% and 1.69% in challenging urban scenarios. Our lightweight solution facilitates efficient and accurate vehicle tracking on edge devices, contributing to more effective ITS deployments and improved road safety.
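Association in SORT-style trackers hinges on intersection-over-union between predicted track boxes and new detections; a minimal sketch of that metric (the paper's modified multi-dimensional descriptor is not reproduced here):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

Each frame, a cost matrix of (1 - IoU) values between tracks and detections is fed to an assignment solver such as the Hungarian algorithm.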

Author 1: Ayoub El-alami
Author 2: Younes Nadir
Author 3: Khalifa Mansouri

Keywords: Distributed systems; intelligent transportation systems; edge computing; object tracking

PDF

Paper 68: Multi-Site Cross Calibration on the LAPAN-A3/IPB Satellite Multispectral Camera with One-Dimensional Kalman Filter Optimization

Abstract: Multispectral cameras on remote sensing satellites must have good radiometric quality due to their wide range of applications. One type of radiometric calibration that can be performed while the satellite is in orbit is cross-calibration. This research focuses on cross-calibration because it has advantages, including being cost-effective and capable of frequent execution. We proposed a multi-site cross-calibration method with two reference cameras using six calibration sites in 2023. The LISA LAPAN-A3 (LA3) camera serves as the target camera, while the OLI LANDSAT-8 (OL8) and MSI SENTINEL-2 (MS2) cameras act as the reference cameras. The calibration process results in numerous calibration coefficients for each channel, thus requiring optimization to produce a single calibration coefficient. The optimization process uses a one-dimensional Kalman filter to reduce measurement noise. The results show that the one-dimensional Kalman filter can reduce noise in the calibration coefficient data, making LA3 radiance values closer to the reference radiance values. Additionally, this study demonstrates that LA3 calibration results with MS2 as the reference camera are better than those with OL8 as the reference.
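For a (nearly) constant calibration coefficient observed with noise, the one-dimensional Kalman filter reduces to a few lines; the noise variances q and r below are illustrative, not the study's values:

```python
def kalman_1d(measurements, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a (nearly) constant state observed with noise.
    q and r are illustrative process/measurement noise variances."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p += q                 # predict: variance grows by the process noise
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update the estimate towards the measurement
        p *= 1 - k             # shrink the estimate variance
        estimates.append(x)
    return estimates
```

Fed the sequence of per-site calibration coefficients, the filter converges to a single smoothed coefficient while damping scene-to-scene measurement noise.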

Author 1: Sartika Salaswati
Author 2: Adhi Harmoko Saputro
Author 3: Wahyudi Hasbi
Author 4: Deddy El Amin
Author 5: Patria Rachman Hakim
Author 6: Silmie Vidiya Fani
Author 7: Agung Wahyudiono
Author 8: Ega Asti Anggari

Keywords: Cross-calibration; multi sites; multispectral camera; LAPAN-A3/IPB Satellite; LISA LAPAN-A3 (LA3); OLI LANDSAT-8 (OL8); MSI/SENTINEL-2 (MS2); one-dimensional Kalman filter

PDF

Paper 69: Combining BERT and CNN for Sentiment Analysis: A Case Study on COVID-19

Abstract: This research focuses on sentiment analysis to understand public opinion on various topics, with an emphasis on COVID-19 discussions on Twitter. By utilizing state-of-the-art Machine Learning (ML) and Natural Language Processing (NLP) techniques, the study analyzes sentiment data to provide valuable insights. The process begins with data preparation, involving text cleaning and length filtering to optimize the dataset for analysis. Two models are employed: a Bidirectional Encoder Representations from Transformers (BERT)-based Deep Learning (DL) model and a Convolutional Neural Network (CNN). The BERT model leverages transfer learning, demonstrating strong performance in sentiment classification, while the CNN model excels at extracting contextual features from the input text. To further enhance accuracy, an ensemble model integrates predictions from both approaches. The study emphasizes the ensemble technique’s value for more precise sentiment analysis. Evaluation metrics, including accuracy, classification reports, and confusion matrices, validate the effectiveness of the proposed models and the ensemble approach. This research contributes to the growing field of social media sentiment analysis, particularly during global health crises like COVID-19, and underscores its potential to aid informed decision-making based on public sentiment.
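The ensemble step can be as simple as weighted soft voting over the two models' class probabilities; a minimal sketch, noting that the abstract does not specify the exact combination rule and the weight w is an assumption:

```python
def ensemble_predict(p_bert, p_cnn, labels, w=0.5):
    """Weighted soft vote: average two class-probability vectors, pick argmax."""
    avg = [w * b + (1 - w) * c for b, c in zip(p_bert, p_cnn)]
    return labels[max(range(len(avg)), key=avg.__getitem__)]
```

Averaging tends to help when the two models make uncorrelated errors, which is plausible here since BERT and the CNN extract quite different features from the text.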

Author 1: Gunjan Kumar
Author 2: Renuka Agrawal
Author 3: Kanhaiya Sharma
Author 4: Pravin Ramesh Gundalwar
Author 5: Aqsa Kazi
Author 6: Pratyush Agrawal
Author 7: Manjusha Tomar
Author 8: Shailaja Salagrama

Keywords: Sentiment analysis; COVID-19; BERT; CNN; ensemble model; NLP; transfer learning

PDF

Paper 70: Visual Translation of Auspicious Beliefs in Quanzhou Xi Culture from the Perspective of Man-Machine Collaboration

Abstract: The “Xi” concept in the inheritance of auspicious culture covers the abundance of spiritual and material life; its symbolism is gorgeous and timeless and has endured for thousands of years. Objective: This study investigates the Quanzhou “happiness” culture, which embodies “reverence for virtue and auspicious beliefs,” exploring its visual symbolization, graphical derivation, redesign, and innovative cultural expressions. Methods: Utilizing literature analysis, field research, and a combination of shape grammar and artificial intelligence, this study dissects and evolves the visual symbols of Quanzhou Xi culture to achieve innovative design through human-machine collaboration. Results: The study deeply refines representative visual symbols of Quanzhou happiness culture, including the “卍” character from Quanzhou embroidery, the Eight Immortals color for wedding happiness, and the longevity turtle cake stamp for longevity happiness. It analyzes and demonstrates the innovative practice of these visual symbols, establishes a folklore perspective, and transitions the happiness culture into a modern fashion context. Conclusion: The research constructs a visual symbol folklore perspective of Quanzhou Xi culture, providing a systematic theoretical foundation and innovative practice paths for promoting and inheriting Xi culture in the modern design field. It promotes Quanzhou Xi culture’s innovative application and fashion transformation in contemporary design.

Author 1: Li Zheng
Author 2: Xu Zhang
Author 3: Huiling Guo

Keywords: Quanzhou Xi culture; symbol visualisation; shape grammar; artificial intelligence; man-machine collaborative design

PDF

Paper 71: Strength Calculation Method of Agricultural Machinery Structure Using Finite Element Analysis

Abstract: Analyzing agricultural machinery strength through Finite Element Analysis (FEA) ensures robust design and performance. This method evaluates structural integrity, enhancing reliability and efficiency in agricultural operations. This paper presents a comprehensive finite element method (FEM) analysis focused on assessing the structural strength of a 3-point cultivator outfitted with seven tynes. Cultivators hold pivotal significance in soil preparation, a foundational aspect of agricultural operations. The principal aim of this analysis is to pinpoint potential failure zones within the cultivator tynes under diverse loading conditions, particularly across varying speeds in medium clay and sandy soil. Anecdotal evidence suggests that domestically manufactured cultivators often exhibit structural deficiencies leading to failures at multiple junctures after just one season of operation. To address this challenge, we constructed a detailed CAD model of the tyne using Siemens NX software. Subsequent FEM analysis, conducted via ANSYS software, facilitated the exploration of stress distributions and deformation characteristics. Our investigation unveiled the maximal and minimal principal stresses alongside total deformation experienced by the tynes. Notably, while the maximum stress approached the material's yield point, it consistently remained within acceptable thresholds, signifying that the resultant deformation did not induce failure. This study underscores the pivotal role of employing FEM analysis in both the design and assessment phases of agricultural machinery development, thereby augmenting durability and operational efficacy. Ultimately, such initiatives aim to furnish manufacturers with invaluable insights to bolster the structural integrity and longevity of cultivators, fostering enhanced reliability and operational efficiency within the agricultural sector.

Author 1: Jing Yang

Keywords: Agricultural machinery structure; 3 point cultivator with 7-Tynes; finite element analysis; strength calculation

PDF

Paper 72: Constructing Knowledge Graph in Blockchain Teaching Program Using Formal Concept Analysis

Abstract: The rapid evolution of blockchain technology calls for innovative educational frameworks to effectively convey its complex principles and applications. This paper investigates the use of Formal Concept Analysis (FCA) for constructing knowledge graphs as part of a blockchain teaching program. FCA, grounded in lattice theory, provides a mathematical foundation for analyzing relationships between concepts, making it an ideal tool for organizing and visualizing knowledge structure within blockchain education. This study aims to develop an interactive, context-based graph that captures the intricate interrelations among blockchain topics. The methodology includes mapping key blockchain concepts and their applications into a structured graph, which enhances both the understanding and the systematic delivery of educational content. The research demonstrates that FCA not only facilitates the creation of scalable and adaptable educational materials but also enhances students' conceptual understanding by presenting the interconnected nature of blockchain concepts in an accessible format. The knowledge graph aids in identifying interconnected learning outcomes that cover overlapping subjects. It serves as a valuable resource for educators focusing on cryptocurrencies, making it easier to create a thorough list of key topics related to particular cryptocurrency characteristics.
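The core of FCA is the pair of derivation operators between object sets and attribute sets, whose fixed points are the formal concepts that become graph nodes. A minimal sketch on a toy blockchain-curriculum context (the topics and attributes are illustrative, not the paper's dataset):

```python
def derive_intent(objs, context):
    """Attributes shared by every object in objs (objs assumed non-empty)."""
    return set.intersection(*(context[o] for o in objs))

def derive_extent(attrs, context):
    """Objects that carry every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def concept_of(objs, context):
    """Close objs into a formal concept (extent, intent)."""
    intent = derive_intent(objs, context)
    return derive_extent(intent, context), intent

# Toy teaching context: topic -> underlying themes (illustrative only).
context = {
    "Merkle tree": {"hashing", "data structures"},
    "Proof of Work": {"hashing", "consensus"},
    "BFT protocols": {"consensus"},
}
```

Enumerating all such closed pairs and ordering them by extent inclusion yields the concept lattice that the knowledge graph visualizes.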

Author 1: Madina Mansurova
Author 2: Assel Ospan
Author 3: Dinara Zhaisanova

Keywords: Knowledge graph; formal concept analysis; blockchain education; curriculum optimization; interactive learning tools

PDF

Paper 73: Analysis of Influencing Factors of Tourist Attractions Accessibility Based on Machine Learning Algorithm

Abstract: Tourist attractions, defined by their cultural importance, aesthetic appeal, and recreational possibilities, are critical to the tourism industry. However, precisely evaluating tourism needs remains a difficult task, and research in this field is scarce. This research introduces an innovative remora-optimized adaptive XGBoost (RO-AXGBoost) model for predicting accessibility factors for tourist attractions. Data was obtained from Kaggle, and the suggested method was implemented in Python. The RO-AXGBoost model's effectiveness was assessed using metrics such as a Mean Absolute Percentage Error (MAPE) of 7.24, a Mean Absolute Error (MAE) of 7.321, a Root Mean Square Error (RMSE) of 10.241, and an R-squared (R²) of 85.7%. The results show that the RO-AXGBoost model surpasses conventional approaches by effectively identifying key determinants that significantly affect the accessibility of tourist attractions.
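The reported evaluation metrics are standard and can be computed directly; a minimal sketch:

```python
def regression_metrics(y_true, y_pred):
    """MAE, RMSE, MAPE (percent; assumes nonzero targets) and R-squared."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errs) / n
    rmse = (sum(e * e for e in errs) / n) ** 0.5
    mape = 100 * sum(abs(e / t) for e, t in zip(errs, y_true)) / n
    mean = sum(y_true) / n
    r2 = 1 - sum(e * e for e in errs) / sum((t - mean) ** 2 for t in y_true)
    return mae, rmse, mape, r2
```

Lower MAE/RMSE/MAPE and an R² nearer 1 indicate a better fit, which is the basis on which RO-AXGBoost is compared against conventional approaches.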

Author 1: Na Liu
Author 2: Hai Zhang

Keywords: Tourist attractions; factors; tourism; remora optimized adaptive XGBoost (RO-AXGBoost)

PDF

Paper 74: A Smart Contract Approach for Efficient Transportation Management

Abstract: Transportation management in Egypt faces challenges such as congestion, inefficiency, and a lack of transparency. This work proposes a smart contract-based transportation framework to address these issues and enhance the efficiency of Egypt's transportation system. By leveraging blockchain technology, smart contracts can facilitate and enforce decentralized and immutable transportation agreements. This approach also fosters increased trust among stakeholders and improves interactions between service providers. This paper presents a conceptual framework that integrates smart contracts, blockchain technology, GPS data, and sensor technologies to further optimize transportation operations. Empirical analysis and case studies demonstrate the effectiveness of smart contracts in improving the shipping registration system. The survey results show that smart contracts streamline processes, enhance data security, reduce costs, and improve accuracy. The proposed model, developed on the NEAR platform, outperforms traditional methods and Ethereum-based models by offering faster registration, better cost-efficiency, and improved transaction tracking. This demonstrates the potential for modernizing and optimizing Egypt’s transportation sector.

Author 1: Abdullah Alshahrani
Author 2: Ayman Khedr
Author 3: Mohamed Belal
Author 4: Mohamed Saleh

Keywords: Blockchain; cryptography; logistics; smart contracts; transportation; security and privacy; supply chains

PDF

Paper 75: Enhancing Educational Outcomes Through AI Powered Learning Strategy Recommendation System

Abstract: In order to develop intelligent learning recommendation systems, this work examines the use of artificial intelligence (AI) techniques, particularly in the field of educational data mining (EDM). The aggregation of such educational data into an efficient analytical system could also serve as a valuable educational aid for students. In fact, it could ultimately advance the direction of education. Sophisticated machine learning methods were employed to analyze various data types, including educational, socioeconomic, and demographic data, to predict student success. In this research, Logistic Regression (LR), Random Forest (RF), Support Vector Machines (SVM), CatBoost, and XGBoost algorithms were considered to build prediction models using a dataset encompassing a wide range of student traits. Robust evaluation metrics, including precision, recall, accuracy, and F1-score, were used to gauge model effectiveness. The results highlighted that RF performed best in accuracy, precision, and recall. Then, a rule engine was built to enhance the system by finding the most efficient learning tactics for students based on their expected future performance. The proposed AI-based personalized recommendation tool shows a substantial step towards enhancing educational decisions. This solution facilitates educators in creating student academic assistance interventions by offering individualized, data-driven learning strategies.
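The abstract mentions a rule engine that maps predicted performance to learning tactics; the paper's actual rules are not given, so the sketch below illustrates only the general pattern, with hypothetical score bands and strategies:

```python
# Illustrative rule engine: map a student's predicted performance score to
# learning strategies. Bands and strategy names are hypothetical examples,
# not the paper's actual rules.
RULES = [
    (lambda score: score < 40, ["remedial tutorials", "weekly mentor check-ins"]),
    (lambda score: 40 <= score < 70, ["targeted practice sets", "peer study groups"]),
    (lambda score: score >= 70, ["enrichment projects", "self-paced advanced modules"]),
]

def recommend(predicted_score):
    """Return the strategies of the first rule the predicted score satisfies."""
    for condition, strategies in RULES:
        if condition(predicted_score):
            return strategies
    return []
```

In the described system, `predicted_score` would come from the trained RF classifier rather than being supplied by hand.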

Author 1: Daminda Herath
Author 2: Chanuka Dinuwan
Author 3: Charith Ihalagedara
Author 4: Thanuja Ambegoda

Keywords: Artificial intelligence; educational data mining; educational strategies; machine learning; personalized recommendation; student performance prediction

PDF

Paper 76: Implementation of Lattice Theory into the TLS to Ensure Secure Traffic Transmission in IP Networks Based on IP PBX Asterisk

Abstract: This paper presents a novel lattice-based cryptography implementation in the Transport Layer Security (TLS) protocol to enhance the security of traffic transmission in IP networks that use the Asterisk IP PBX platform. Given the growing threat of quantum computing, traditional cryptographic methods are becoming increasingly vulnerable. To address this issue, the study leverages post-quantum cryptography by developing a modified TLS protocol using lattice-based cryptographic algorithms. The performance of the system was evaluated in terms of security, computational efficiency, and real-time communication. The study shows that the proposed lattice-based TLS implementation effectively secures traffic transmission in IP PBX networks, offering a robust solution against both current and future quantum threats.

Author 1: Olga Abramkina
Author 2: Mubarak Yakubova
Author 3: Tansaule Serikov
Author 4: Yenlik Begimbayeva
Author 5: Bakhodyr Yakubov

Keywords: IP; PBX; Asterisk; TLS; MITM; post-quantum cryptography

PDF

Paper 77: Energy Optimization Management Scheme for Manufacturing Systems Based on BMAPPO: A Deep Reinforcement Learning Approach

Abstract: To address the depletion of traditional energy sources and the increasingly severe environmental pollution, countries around the world have accelerated the deployment of renewable energy generation equipment. Energy optimization management for microgrids can address the randomness of factors such as renewable energy generation and load, ensuring the safe and stable operation of the system while achieving objectives such as cost minimization. Therefore, this paper conducts an in-depth study of energy optimization management schemes for microgrids and designs a multi-microgrid energy optimization management model and algorithm based on deep reinforcement learning. For the joint optimization problem among multiple microgrids with power flow between them, a two-layer energy optimization management scheme based on the multi-agent proximal policy optimization (PPO) algorithm and optimal power flow (BMAPPO) is proposed. This scheme is divided into two layers: first, the lower layer uses the multi-agent proximal policy optimization algorithm to determine the output of various controllable power devices in each microgrid; then, based on the lower layer's optimization results, the upper layer uses a second-order cone relaxation optimal power flow model to solve the optimal power flow between multiple microgrids, achieving power scheduling among them; finally, the total cost of the upper and lower layers is calculated to update the network parameters. Experimental results show that compared with other schemes, the proposed scheme achieves multi-microgrid energy optimization management at the lowest cost while ensuring online execution speed.
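A minimal toy sketch of the two-layer idea described above: a lower layer proposes per-microgrid outputs, an upper layer settles the inter-microgrid exchange, and the combined cost would drive the (omitted) policy update. All capacities, prices, and cost models here are illustrative stand-ins, not the paper's BMAPPO implementation:

```python
def lower_layer(demands):
    # Stand-in for the multi-agent PPO policies: each microgrid covers as
    # much of its own demand as its (hypothetical) local capacity allows.
    capacity = 8.0
    return [min(d, capacity) for d in demands]

def upper_layer(demands, outputs):
    # Stand-in for the second-order-cone optimal-power-flow solve: route
    # the aggregate surplus/deficit at an illustrative per-unit price.
    deficit = sum(d - o for d, o in zip(demands, outputs))
    return 0.5 * max(deficit, 0.0)

demands = [6.0, 10.0, 9.0]          # made-up loads for three microgrids
outputs = lower_layer(demands)       # lower-layer device set-points
generation_cost = sum(0.2 * o for o in outputs)
total_cost = generation_cost + upper_layer(demands, outputs)
# In the actual scheme, total_cost would feed back into the PPO update.
```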

Author 1: Zhe Shao

Keywords: Microgrid; energy optimization management; deep reinforcement learning; multi-agent; Proximal Policy Optimization (PPO)

PDF

Paper 78: Design Science Research: Applying Integrated Fogg Persuasive Frameworks to Validate Rural ICT Design Requirements

Abstract: Designing for digital equality is critical in the modern world. Digital inequality is more pronounced in rural areas, where the majority are illiterate and poor. As a result of this, individuals are not motivated, enabled and triggered enough to access and use Information and Communication Technologies. Additionally, existing rural ICT artifacts or applications are not usable by these demographics. Therefore, this paper understood and validated rural ICT design requirements. It achieved this by developing a community learning system, applying design science research methodology and integrated Fogg persuasive frameworks. Results shows that use of local language, local content, videos, audio, touch-based input, proper content categorization, accessibility (location) and peer participation and collaboration fosters user engagement with the ICT artifact. These approaches had a significant impact in the achievement of user self-efficacy. This is explained by the task findings, 71%, 83%, 70% and 78% successfully, within the stipulated time and on their own, accomplished tasks 1, 2,3 and 4 respectively. Users found the content to be practical and applicable to their day-to-day activities. Users appreciated the system’s potential for learning indicating that it could significantly enhance their knowledge and skills. The significance of Design Science Research and integrated Fogg persuasive frameworks in creating usable and accessible ICT solutions tailored to the needs of the target population cannot be underrated. It was concluded that design solutions targeting vulnerable demographics are key to the success of designs for digital equality. In other words, usable solutions for the aged, women illiterate, uneducated and the poor, are more usable for the young, men, literate, educated, and the rich (financially stable). Thus, enhancing inclusivity in access and use of rural ICTs.

Author 1: Noela Jemutai Kipyegen
Author 2: Benard Okelo

Keywords: Digital equality; rural areas; design requirements; rural ICT; artifacts; validate; Fogg frameworks; design science research

PDF

Paper 79: Brain Tumor Segmentation of Magnetic Resonance Imaging (MRI) Images Using Deep Neural Network Driven Unmodified and Modified U-Net Architecture

Abstract: Accurately separating healthy tissue from tumorous regions is crucial for effective diagnosis and treatment planning based on magnetic resonance imaging (MRI) data. Current manual detection methods rely heavily on human expertise, so MRI-based segmentation is essential to improving diagnostic accuracy and treatment outcomes. The purpose of this paper is to compare the performance of brain tumor segmentation from MRI images using an unmodified deep neural network (DNN) U-Net architecture and a modified version that adds batch normalization and dropout to the encoder layers, with and without a freeze layer. The study utilizes a public 2D brain tumor dataset containing 3064 T1-weighted contrast-enhanced images of meningioma, glioma, and pituitary tumors. Model performance was evaluated using intersection over union (IoU) and standard metrics such as precision, recall, F1-score, and accuracy across training, validation, and testing stages. Statistical analysis, including ANOVA and Duncan's multiple range test, was conducted to determine the significance of performance differences across the architectures. Results indicate that while the modified architectures show improved stability and convergence, the freeze layer model demonstrated superior IoU and efficiency, making it a promising approach for more accurate and efficient brain tumor segmentation. The comparison of the three methods revealed that the modified U-Net architecture with a freeze layer significantly reduced training time by 81.72% compared to the unmodified U-Net while maintaining similar performance across validation and testing stages. All three methods showed comparable accuracy and consistency, with no significant differences in performance during validation and testing.
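Intersection over union, the headline segmentation metric here, can be illustrated on flattened binary masks; a minimal sketch with synthetic masks, not the paper's data:

```python
def iou(mask_a, mask_b):
    """Intersection over union for two flattened binary masks (lists of 0/1)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 or b == 1)
    return inter / union if union else 1.0  # two empty masks agree perfectly

# Synthetic predicted and ground-truth masks for a 6-pixel image.
pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
```

Here the masks share 2 positive pixels out of 4 positive in either, giving IoU = 0.5.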

Author 1: Nunik Destria Arianti
Author 2: Azah Kamilah Muda

Keywords: Accuracy; brain tumor; DNN; U-Net architecture; comparison performance

PDF

Paper 80: Indoor Landscape Design and Environmental Adaptability Analysis Based on Improved Fuzzy Control

Abstract: With the increasing demand for automation and intelligence in indoor landscape design, exploring efficient and precise control strategies has become particularly important. Robot-assisted technology and the A* algorithm are utilized for indoor environment localization and mapping. Then, type-2 adaptive fuzzy control is applied for automatic indoor landscape design. An improved genetic algorithm is utilized for environmental analysis to enhance the adaptability of indoor landscape design to the environment. In the results, the robot adopting this algorithm was significantly better than ordinary robots in path planning optimization, with a fitting accuracy of over 95%. The type-2 fuzzy control model achieved a maximum speed of 0.75 m/s and an overshoot of only 7.1% for balancing robots, resulting in a faster recovery speed and smaller overshoot. The proposed method performed the best in terms of functionality, aesthetics, technicality, accessibility, and user satisfaction for landscape design effectiveness and environmental adaptability. The research improves the automation of indoor landscape design. Meanwhile, the combination of fuzzy control and genetic algorithms enhances design accuracy and environmental adaptability. This provides a new technological path for indoor landscape design.

Author 1: Jinming Liu
Author 2: Qian Hu
Author 3: Pichai Sodbhiban

Keywords: Fuzzy control; indoor landscape design; environment; adaptability analysis; robot assisted

PDF

Paper 81: Optimising Delivery Routes Under Real-World Constraints: A Comparative Study of Ant Colony, Particle Swarm and Genetic Algorithms

Abstract: Effective logistics systems are essential for fast and economical package delivery, especially in urban areas. The intricate and ever-changing nature of urban logistics makes traditional methods insufficient; hence, the need for sophisticated optimisation techniques has increased. To optimise package delivery routes, this study compares the performance of three popular evolutionary algorithms: ant colony optimisation (ACO), particle swarm optimisation (PSO), and genetic algorithms (GA). The goal is to find the best algorithm for minimising delivery time and cost while taking into account real-world constraints, such as delivery priority, which ensures that higher-priority deliveries are handled before others and may substantially impact route optimisation. We examine each algorithm to create the best possible route plans for delivery trucks using actual data. Several factors are employed to assess each algorithm’s performance, including robustness to changes in environmental variables and computational efficiency. The simulation models delivery demands using actual data. Results indicate that ACO performed better in Los Angeles and Chicago, completing the shortest routes with respective distances of 126,254.18 and 59,214.68, indicating a high degree of flexibility in intricate urban layouts. With the best distance of 48,403.1 in New York, on the other hand, GA achieved good results, demonstrating its usefulness in crowded urban settings. These results highlight how incorporating evolutionary algorithms into urban logistics can improve sustainability and efficiency.
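The delivery-priority constraint described above can be illustrated with a simple greedy baseline (not one of the three evolutionary algorithms compared): visit the most urgent priority tier first, choosing the nearest unvisited stop within each tier. Coordinates and priorities are made up:

```python
import math

def route(stops, start=(0.0, 0.0)):
    """stops: list of (x, y, priority); a lower priority number is more urgent."""
    remaining = list(stops)
    pos, order = start, []
    while remaining:
        top = min(s[2] for s in remaining)                 # most urgent tier
        tier = [s for s in remaining if s[2] == top]
        nxt = min(tier, key=lambda s: math.dist(pos, s[:2]))  # nearest in tier
        order.append(nxt)
        remaining.remove(nxt)
        pos = nxt[:2]
    return order

# Hypothetical stops: (x, y, priority).
stops = [(5.0, 0.0, 2), (1.0, 1.0, 1), (2.0, 2.0, 1), (0.0, 4.0, 2)]
plan = route(stops)
```

An evolutionary algorithm would instead search over permutations, penalising any route that serves a lower-priority stop before a higher-priority one; the greedy plan above just makes the constraint concrete.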

Author 1: Rneem I. Aldoraibi
Author 2: Fatimah Alanazi
Author 3: Haya Alaskar
Author 4: Abed Alanazi

Keywords: Evolutionary algorithms; genetic algorithm; particle swarm optimisation; ant colony optimisation; urban logistics; route optimisation

PDF

Paper 82: Volleyball Motion Analysis Model Based on GCN and Cross-View 3D Posture Tracking

Abstract: The tracking of motion targets occupies a central position in sports video analysis. To further understand athletes' movements, analyze game strategies, and evaluate sports performance, a 3D posture estimation and tracking model is designed based on a Graphical Convolutional Neural Network and the concept of "cross-view" tracking. The outcomes revealed that the loss function curve of the designed 3D tracking model had the fastest convergence, with a minimum convergence value of 0.02. The average precision mean values for the four different publicly available datasets were above 0.90. The maximum improvement reached 21.06% and the minimum average absolute percentage error was 0.153. The higher-order tracking accuracy of the model reached 0.982. Association intersection over union was 0.979. Association accuracy and detection accuracy were 0.970 and 0.965, respectively. During the volleyball video analysis, the tracking accuracy and tracking precision reached 89.53% and 90.05%, respectively, with a tracking speed of 33.42 fps. Meanwhile, the method's trajectory tracking completeness was always maintained at a high level, with its posture estimation correctness reaching 0.979. The mostly-tracked and mostly-lost metrics confirmed the method's tracking ability over long durations and across views, with high model robustness. This study helps to promote the development and application of related technologies, advance the intelligent development of volleyball training, competition and analysis, and improve the efficiency of the sport and the level of competition.

Author 1: Hongsi Han
Author 2: Jinming Chang

Keywords: Graphical Convolutional Neural Network; posture estimation; volleyball; motion analysis model; 3D tracking

PDF

Paper 83: A DECOC-Based Classifier for Analyzing Emotional Expressions in Emoji Usage on Social Media

Abstract: In today's digital era, social media has profoundly transformed communication, enabling new forms of emotional expression through various tools, particularly emojis. Initially created to represent simple emotions, emojis have evolved into a rich and nuanced visual language capable of conveying complex emotional states. While their role in communication is well-documented, there remains a gap in effectively analyzing and interpreting the emotional subtleties conveyed through emojis. This paper presents an innovative approach to sentiment analysis that goes beyond conventional methods by integrating a machine learning model, specifically the DECOC (Error Correcting Output Codes) classifier, tailored for the combined analysis of text and emoji sequences. The proposed model addresses the limitations of existing methods, which often overlook the sequential and contextual nature of emojis in emotional expression. By applying this model to real-world data, including a survey of social media users in Saudi Arabia, we demonstrate its high efficacy, achieving an average accuracy of 94.76%. This result not only outperforms prior models but also validates the significance of treating emojis as fundamental components of digital sentiment analysis. Our findings underscore the critical need for advanced models to decode the emotional layers of emoji usage, offering deeper insights into their role in contemporary digital communication.
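The DECOC classifier builds on error-correcting output codes; the variant's details are not given here, but the core ECOC decoding step (assigning the class whose codeword is nearest in Hamming distance to the binary classifiers' outputs) can be sketched with a hypothetical codebook:

```python
# Minimal ECOC decoding sketch. The codebook and sentiment labels are
# hypothetical illustrations, not the paper's DECOC configuration.
CODEBOOK = {
    "positive": (1, 1, 0, 1),
    "negative": (0, 0, 1, 1),
    "neutral":  (1, 0, 1, 0),
}

def decode(bits):
    """Pick the class whose codeword is nearest in Hamming distance."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(CODEBOOK, key=lambda cls: hamming(CODEBOOK[cls], bits))
```

The error-correcting property is visible in decoding: even if one binary classifier flips a bit, e.g. outputs `(1, 1, 1, 1)` instead of `(1, 1, 0, 1)`, the nearest codeword still recovers "positive".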

Author 1: Shaya A. Alshaya

Keywords: Emojis; social media communication; WhatsApp; emotional expression; machine learning; DECOC classifier

PDF

Paper 84: A Smart IoT System for Enhancing Safety in School Bus Transportation

Abstract: School districts globally implement comprehensive and expensive strategies to offer safe bus transportation to and from school. However, these technologies are infeasible for schools with limited financial resources, thereby leaving students at risk of serious injury. This study focuses on five major obstacles to safe school transportation: 1) students forgotten on the bus unattended; 2) students’ abnormal behavior; 3) overcrowding; 4) abnormal driver behavior; and 5) the risk of a bus running over children after they have disembarked. This paper develops an intelligent system using the Internet of Things, incorporating rule-based and mathematical solutions, to overcome the five transportation safety issues mentioned above and to enhance student safety, using a bracelet system, short- and long-range RFID sensors, and a processing unit to monitor the bus and its surrounding area. The proposed solution is superior to previous works in the same field: it is distinguished by its comprehensiveness and reasonable cost, making it affordable and easy to both install and maintain.
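The five rule-based checks can be illustrated as simple predicates over sensor readings; the field names and thresholds below are hypothetical, not the paper's:

```python
# Illustrative rule-based safety checks for the bus scenarios described
# above. State fields and thresholds are hypothetical stand-ins for the
# bracelet/RFID readings the paper's system would provide.
def safety_alerts(state):
    alerts = []
    if state["trip_ended"] and state["students_on_board"] > 0:
        alerts.append("student left on bus")
    if state["students_on_board"] > state["seat_capacity"]:
        alerts.append("overcrowding")
    if state["speed_kmh"] > 80:
        alerts.append("abnormal driver behavior")
    if state["child_near_bus"] and state["speed_kmh"] > 0:
        alerts.append("run-over risk after disembarking")
    return alerts

# One bracelet still registered on board after the trip ended.
state = {"trip_ended": True, "students_on_board": 1,
         "seat_capacity": 40, "speed_kmh": 0, "child_near_bus": False}
```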

Author 1: Yousef H. Alfaifi
Author 2: Tareq Alhmiedat
Author 3: Emad Alharbi
Author 4: Ahad Awadh Al Grais
Author 5: Maha Altalk
Author 6: Abdelrahman Osman Elfaki

Keywords: IoT; safety; RFID; school bus; transportation

PDF

Paper 85: Machine Learning Approaches Applied in Smart Agriculture for the Prediction of Agricultural Yields

Abstract: Machine learning techniques in smart agriculture for yield prediction involve using algorithms to analyze historical and real-time data to forecast crop yields. These approaches aim to optimize agricultural practices, improve resource efficiency, and enhance productivity. This paper reviews the application of machine learning techniques in smart agriculture for predicting agricultural yields. With the advent of data-driven technologies, machine learning algorithms have become instrumental in analyzing vast amounts of agricultural data to forecast crop yields accurately. Various machine learning models, such as regression, classification, and ensemble methods, have been employed to process historical and real-time data on weather patterns, soil conditions, crop types, and farming practices. These models enable farmers and stakeholders to make informed decisions, optimize resource allocation, and mitigate risks associated with agricultural production. Furthermore, the integration of Internet of Things devices and remote sensing technologies has facilitated data collection and improved the precision of yield predictions. This paper discusses the key machine learning approaches, challenges, and future directions in leveraging data analytics for enhancing agricultural productivity and sustainability in smart farming systems. Simulations were carried out to verify the theoretical results. The study found that different machine learning techniques had varying accuracy for predicting agricultural yields. ViT-B16 achieved the highest F1-score (99.40%), followed by ResNet-50 (99.54%) and CNN (97.70%), while RPN algorithms had lower accuracy (91.83%). Correlation analysis showed a strong positive relationship between humidity and soil moisture, favoring crop growth, while production had minimal correlation with temperature and area.
The AdaBoost Regressor was the best performer, with the lowest MAE (0.22), MSE (0.1), and RMSE (0.31), and Random Forest showed strong predictive power with an R² score of 0.89. Seasonal data indicated that autumn had the highest agricultural production, followed by spring, while summer and winter had much lower yields due to weather conditions. Seasonal temperature variations from 1997 to 2014 showed autumn was the warmest (34.43°C), boosting crop production, and winter the coldest (34.31°C), reducing yields. These temperature shifts significantly impacted agricultural productivity, with warm seasons enhancing growth and extreme temperatures in summer and winter limiting it. Machine learning techniques in smart agriculture are pivotal for predicting crop yields by leveraging historical and real-time data, thus optimizing practices and resource use while boosting productivity. This involves deploying diverse machine learning models, such as regression, classification, and ensembles, to analyze extensive data on weather, soil, crops, and farming methods. Such models empower stakeholders with insights for informed decisions, efficient resource allocation, and risk mitigation in agricultural operations. The integration of the Internet of Things and remote sensing further refines data accuracy, aiding precise yield predictions. Despite these advancements, challenges persist, including data quality assurance, model complexity, scalability, and interoperability, driving ongoing research and simulations to validate and improve ML applications for sustainable and productive smart farming systems.

Author 1: Abourabia. Imade
Author 2: Ounacer. Soumaya
Author 3: Elghoumari. Mohammed yassine
Author 4: Azzouazi. Mohamed

Keywords: Machine learning; IoT; artificial intelligence; agricultural yields; smart agriculture; CNN; ViT-B16

PDF

Paper 86: K-Means and Morphology Based Feature Element Extraction Technique for Clothing Patterns and Lines

Abstract: In clothing design and production, the traditional manual feature element extraction method suffers from low efficiency and insufficient precision, making it difficult to meet the automation and intelligence needs of the modern clothing industry. To solve this problem, this paper proposes a technique that combines the K-means clustering algorithm with morphological methods to extract clothing pattern and line feature elements. The technique uses the K-means clustering algorithm to preprocess clothing images and extract pattern feature elements, and then introduces morphological methods to extract line feature elements. This approach not only improves the accuracy and efficiency of feature element extraction, but also retains the details of clothing images, providing strong support for automatic and intelligent processing in clothing design and production.
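A tiny end-to-end sketch of the two stages (K-means to separate pattern pixels, then a 3x3 morphological dilation on the resulting mask), using a synthetic 3x3 grayscale image rather than real clothing data:

```python
def kmeans_1d(values, k=2, iters=20):
    """Toy K-means on scalar pixel intensities (k=2: background vs pattern)."""
    centers = [min(values), max(values)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any neighbour is set."""
    h, w = len(mask), len(mask[0])
    return [[int(any(mask[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w))
             for x in range(w)] for y in range(h)]

# Synthetic "image": dark background (10) and bright pattern pixels (200).
image = [[10, 10, 200], [10, 200, 200], [10, 10, 10]]
flat = [v for row in image for v in row]
centers = sorted(kmeans_1d(flat))
# Pattern mask: pixels closer to the bright cluster center.
mask = [[int(abs(v - centers[1]) < abs(v - centers[0])) for v in row] for row in image]
thick = dilate(mask)
```

A real pipeline would cluster in color space and use erosion/dilation pairs to clean and trace lines; this sketch only shows the two stages composing.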

Author 1: Xiaojia Ding

Keywords: K-means; morphological algorithm; feature extraction

PDF

Paper 87: Simulation Analysis of Obstacle Crossing Stability for Transmission Line Inspection Robot

Abstract: As an indispensable energy source in production and daily life, electricity has important implications for the operation of society and economic development. As the hub of power transmission, the safety of transmission lines is tied to the stability of the power grid. Regular inspection of transmission lines is an effective measure to ensure the stability of the power system. Inspection robots are used for regular inspection of transmission lines due to their advantages such as low cost and long running time. To achieve collision-free obstacle crossing, the study proposes a path planning algorithm based on an improved bidirectional fast expanding random tree, built on a kinematic analysis of the robot. According to the experimental results, when crossing the damper, the rotation ranges of the 1st claw/arm and the 2nd bracket/arm/claw were (0°, 22°), (-50°, 10°), (0°, 25°), (-50°, 10°), and (0°, 22°), respectively. The corresponding rotational speeds were (-1.5, 1.5) deg/s, (-3, 2.5) deg/s, (-3.5, 3.5) deg/s, (-2.5, 3) deg/s, and (-27, 2) deg/s, respectively. The extension ranges of the upper, middle, lower, and horizontal push rods were (0, 100) mm, (0, 110) mm, (-60, 10) mm, and (0, 20) mm, respectively. From the above results, when crossing obstacles, the motion acceleration of the inspection robot is not significant and the speed changes smoothly. The obstacle-crossing path planning algorithm proposed in the study can achieve stable motion of the inspection robot.

Author 1: Qianli Wang

Keywords: Transmission line; inspection robot; obstacle crossing path; kinematic analysis; bidirectional fast expanding random tree

PDF

Paper 88: Precision Machining of Hard-to-Cut Materials: Current Status and Future Directions

Abstract: Machining difficult materials like superalloys, ceramics, and composites is fundamental in industries where performance is paramount, such as the automotive industry, aerospace, and medicine. These materials, with their high strength, hardness, and high-temperature capability, pose difficulties in machining, thus calling for improved precision machining technologies. This survey paper presents a detailed review of the current state of the art in precision machining of these difficult materials, along with advances in cutting tools, machining techniques, and emerging technologies. The cutting tools reviewed range from carbide and ceramic to superhard tools, and tool geometry and tool coatings are also covered. The article also discusses specifics of traditional and nontraditional machining processes, including turning, milling, electrical discharge, and laser machining, as well as their relation to additive and hybrid manufacturing. The role of new digital and intelligent manufacturing systems in enhancing the accuracy and productivity of machining is also illustrated. Moreover, future research will aim to minimize tool wear, enhance surface finish and integrity, and pursue environmentally conscious machining. The paper concludes with a hopeful note on the potential of future research to revolutionize the precision machining industry, offering high performance and reliability in critical applications while maintaining a focus on sustainability.

Author 1: Tengjiao CUI

Keywords: Precision machining; hard-to-cut materials; cutting tools; machining processes; emerging technologies

PDF

Paper 89: Analyzing VGG-19’s Bias in Facial Beauty Prediction: Preference for Feminine Features

Abstract: From an evolutionary perspective, sexual dimorphism has been linked to perceived attractiveness, with masculine traits preferred in men and feminine traits in women. Moreover, symmetry is a strong predictor of facial attractiveness across both sexes. Recent advancements in the field of artificial intelligence have enabled algorithms to accurately predict facial attractiveness. This study aims to investigate whether these algorithms accurately replicate human judgments of attractiveness. We hypothesized that sexually dimorphic manipulations (masculinized men and feminized women) (H1), as well as symmetrized versions (H2), would elicit higher attractiveness ratings from a facial beauty prediction algorithm. Employing transfer learning, we trained six deep-learning models using four facial databases with attractiveness ratings (n = 6848). The top-performing model, VGG-19, demonstrated a high prediction correlation of .86 on the test set. Surprisingly, our findings revealed an interaction effect between sex and sexual dimorphism. Feminized versions of both men’s and women’s faces obtained higher attractiveness ratings than their masculinized counterparts. For symmetry, our results indicated that symmetrized faces were perceived as more attractive, albeit exclusively among women. These findings offer novel insights into the understanding of facial attractiveness from both algorithmic and human behavioral perspectives.

Author 1: Nuno Fernandes
Author 2: Sandra Soares
Author 3: Joana Arantes

Keywords: Deep learning; facial attractiveness; sexual dimorphism; symmetry; VGG-19

PDF

Paper 90: Real-Time Self-Localization and Mapping for Autonomous Navigation of Mobile Robots in Unknown Environments

Abstract: This paper delves into the progressive design and operational capabilities of advanced robotic platforms, highlighting their adaptability, precision, and utility in diverse industrial settings. Anchored by a robust modular design, these platforms integrate sophisticated sensor arrays, including LiDAR for enhanced spatial navigation, and articulated limbs for complex maneuverability, reflecting significant advancements in automation technology. We examine the architectural intricacies and technological integrations that enable these robots to perform a wide range of tasks, from material handling to intricate assembly operations. Through a detailed analysis of system configurations, we assess the implications of such technologies on efficiency and customization in automated processes. Furthermore, the paper discusses the challenges associated with the deployment of advanced robotics, including the complexities of system integration, maintenance, and the steep learning curve for operational proficiency. We also explore future directions in robotic development, emphasizing the potential integration with emerging technologies such as artificial intelligence, the Internet of Things, and augmented reality, which promise to elevate autonomous decision-making and improve human-robot interaction. This comprehensive review aims to provide insights into the current capabilities and future prospects of robotic systems, offering a perspective on how ongoing innovations may reshape industrial practices, enhance operational efficiency, and redefine the landscape of automation technology.

Author 1: Serik Tolenov
Author 2: Batyrkhan Omarov

Keywords: Robotic platforms; automation technology; LiDAR navigation; system integration; artificial intelligence; Internet of Things; human-robot interaction

PDF

Paper 91: Optimizing LSTM-Based Model with Ant-Lion Algorithm for Improving Thyroid Prognosis

Abstract: In the healthcare sector, early and accurate disease detection is essential for providing appropriate care on time. This is especially crucial for thyroid problems, which can be difficult to diagnose because of their many symptoms. This study proposes a new thyroid disease prediction model that uses the Ant Lion Optimization (ALO) approach to tune the hyperparameters of the Long Short-Term Memory (LSTM) deep learning algorithm. After the preprocessing step, we apply an entropy technique for feature selection, which retains the most important features as an optimal subset. The ALO is then employed to optimize the LSTM, identifying the hyperparameters that most influence the model and enhance its efficiency. To assess the suggested methodology, we chose a widely used thyroid disease dataset containing 9,172 samples and 31 features. The model's performance was evaluated using accuracy, precision, recall, and F1 score. The experimental results showed that: 1) the entropy technique in the feature selection step reduced the total number of features from 31 to 10; and 2) the recommended strategy, which selected the optimal hyperparameters for the LSTM using the ALO algorithm, improved the classifier overall by 7.2% and produced the highest accuracy of 98.6%.

Author 1: Maria Yousef

Keywords: Thyroid disease; LSTM; ALO; prediction model; optimization algorithm

PDF
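The entropy-based feature-selection step this abstract describes can be illustrated with a generic information-gain ranking. The sketch below is our own minimal NumPy illustration (function names and the discrete-feature assumption are ours), not the paper's implementation:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a discrete label vector, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """Entropy reduction in `labels` from conditioning on a discrete feature."""
    conditional = sum(
        (feature == v).mean() * entropy(labels[feature == v])
        for v in np.unique(feature)
    )
    return entropy(labels) - conditional

def select_top_k(X, y, k):
    """Rank columns of X by information gain and keep the k best."""
    scores = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]
```

In a pipeline like the paper's, a ranking of this kind would shrink the 31 thyroid features to the 10 most informative before the ALO-tuned LSTM is trained.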

Paper 92: Balancing Privacy and Performance: Exploring Encryption and Quantization in Content-Based Image Retrieval Systems

Abstract: This paper presents three significant contributions to the field of privacy-preserving Content-Based Image Retrieval (CBIR) systems for medical imaging. First, we introduce a novel framework that integrates the VGG-16 Convolutional Neural Network with a multi-tiered encryption scheme specifically designed for medical image security. Second, we propose an innovative approach to model optimization through three distinct quantization methods (max, 99th percentile, and KL divergence), which significantly reduces computational overhead while maintaining retrieval accuracy. Third, we provide comprehensive empirical evidence demonstrating the framework's effectiveness across multiple medical imaging modalities, achieving 94.6% accuracy with 99th-percentile quantization while maintaining privacy through encryption. Our experimental results, conducted on a dataset of 1,200 medical images across three anatomical categories (lung, brain, and bone), show that our approach successfully balances the competing demands of privacy preservation, computational efficiency, and retrieval accuracy. This work represents a significant advancement in making secure CBIR systems practically deployable in resource-constrained healthcare environments.

Author 1: Mohamed Jafar Sadik
Author 2: Noor Azah Samsudin
Author 3: Ezak Fadzrin Bin Ahmad

Keywords: Content-Based Image Retrieval (CBIR); Convolutional Neural Networks (CNN); encrypted data; feature extraction; Fully Homomorphic Encryption (FHE); medical imaging; privacy; quantization; retrieval accuracy

PDF
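Two of the three calibration strategies the abstract names (max and 99th percentile) reduce to choosing a clipping threshold for the quantization scale. A minimal int8 sketch, assuming symmetric quantization (the KL-divergence method, which searches thresholds by comparing distributions, is omitted for brevity; function names are ours):

```python
import numpy as np

def calibrate_scale(activations, method="max", num_bits=8):
    """Pick a symmetric quantization scale from observed activation statistics."""
    qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
    if method == "max":
        threshold = np.abs(activations).max()               # keep every value representable
    elif method == "percentile":
        threshold = np.percentile(np.abs(activations), 99)  # clip the top 1% outliers
    else:
        raise ValueError(f"unknown method: {method}")
    return threshold / qmax

def quantize(x, scale):
    """Round to the nearest int8 step, clipping values beyond the threshold."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)
```

Because the percentile threshold ignores rare outliers, it yields a smaller scale and hence finer resolution for typical values, at the cost of clipping extremes; this trade-off is consistent with the accuracy the paper reports for 99th-percentile calibration.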

Paper 93: Tracking Computer Vision Algorithm Based on Fusion Twin Network

Abstract: Deep learning technology has driven the rapid development of visual object tracking, and algorithms based on twin networks are a hot research direction. Although this method has broad application prospects, its performance often degrades sharply when the target is occluded or similar objects appear in the background. To address this issue, a method is proposed that integrates channel- and spatial-dimension attention mechanisms into the backbone architecture of the twin network, improving the algorithm's recognition accuracy for tracking targets and its stability in changing environments. A region proposal network based on adaptive anchor-box generation is then combined with the twin network to enhance its ability to model complex situations. Finally, a new visual tracking algorithm is designed. In comparative experiments, the attention-enhanced variant's success rate increased by 0.6% and 0.9% on the two datasets, with accuracy gains of 1.2% and 1.8%; the variant with the adaptive anchor-box region proposal network improved the success rate by 1.5% and 1.2%, and accuracy by 1.2% and 0.6%, respectively. These results indicate that the improved algorithm enhances target-tracking performance and has clear application potential in visual object tracking.

Author 1: Xin Wang

Keywords: Visual tracking; twin network; integration; attention mechanism; self-adaption

PDF
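The channel- and spatial-attention idea the abstract integrates into the twin-network backbone can be sketched in NumPy. This is a simplified, hand-rolled illustration (a squeeze-and-excitation-style channel gate plus a pooled spatial gate); the paper's actual modules, and the learned convolution normally used in the spatial branch, are replaced by stand-ins:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """feat: (C, H, W); w1: (C//r, C), w2: (C, C//r) bottleneck weights.
    Squeeze-and-excite style: pool per channel, gate channels in (0, 1)."""
    squeeze = feat.mean(axis=(1, 2))                     # (C,) global average pool
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))   # bottleneck MLP + gate
    return feat * excite[:, None, None]

def spatial_attention(feat):
    """Gate each spatial location by pooled channel statistics
    (a stand-in for the usual conv over the pooled maps)."""
    pooled = np.stack([feat.mean(axis=0), feat.max(axis=0)])  # (2, H, W)
    gate = sigmoid(pooled.mean(axis=0))                       # (H, W) in (0, 1)
    return feat * gate[None]
```

Both gates rescale rather than replace the backbone features, which is why such modules can be inserted into an existing twin network without changing its output shapes.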

Paper 94: Graph Neural Networks and Dominant Set Algorithms for Energy-Efficient Internet of Things Environments: A Review

Abstract: The widespread usage of Internet of Things (IoT) devices opens up new opportunities for automated operations, monitoring, and communications across various industries. However, extending the lifespan of IoT networks remains crucial because IoT devices are energy-limited. This study investigates the convergence of Graph Neural Networks (GNNs) and dominant set algorithms to extend the longevity of IoT networks. GNNs are neural networks that capture complex relationships and node interactions based on graph-structured data. With these capabilities, GNNs are extremely effective at modeling IoT network dynamics, where devices are connected and whose interactions have a significant impact on performance. In contrast, dominant set algorithms are defined as an approach in which nodes of a network function as agents or leaders to perform resource-efficient and resource-distributed communication. A further detailed overview leverages existing techniques to describe GNNs' role in optimizing dominant set algorithms and discusses integrating these technologies into addressing energy efficiency challenges in IoT settings.

Author 1: Dezhi Liao
Author 2: Xueming Huang

Keywords: Internet of Things; energy efficiency; dominant set; Graph Neural Networks

PDF

Paper 95: Development of Traffic Light and Road Sign Detection and Recognition Using Deep Learning

Abstract: Traffic light and road sign violations significantly contribute to traffic accidents, particularly at intersections in high-density urban areas. To address these challenges, this research focuses on enhancing the accuracy, robustness, and reliability of Autonomous Vehicle (AV) perception systems using advanced deep learning techniques. The novelty of this study lies in the comprehensive development and evaluation of real-time traffic light and road sign detection systems, comparing state-of-the-art models including YOLOv3, YOLOv5, and YOLOv7. The models were rigorously tested in a controlled offline environment using the Nvidia Titan RTX, followed by extensive field testing on an AV test vehicle equipped with a sensor suite and an Nvidia RTX GPU. The testing was conducted across complex urban driving scenarios at the CETRAN proving test track, JTC Cleantech Park, and the NTU Singapore campus. The traffic light detection and recognition (TLR) results demonstrate that YOLOv7 outperforms YOLOv5 and YOLOv3, achieving a mean Average Precision (mAP@0.5) of 93%, even under challenging conditions like poor lighting and occlusions, while traffic road sign detection (TSD) reached a mAP@0.5 of 96%. This superior performance highlights the potential of YOLOv7 in enhancing AV safety and reliability. The conclusions underscore the effectiveness of YOLOv7 for real-time detection in AV perception systems, offering crucial insights for future research. Potential implications include the development of more robust and accurate AV systems, capable of safely navigating complex urban environments.

Author 1: Joseph M. De Guia
Author 2: Madhavi Deveraj

Keywords: Artificial intelligence; autonomous vehicle; traffic light recognition; road sign detection; YOLO; real-time object detection

PDF
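A core post-processing step in any YOLO-style detector such as those compared here is non-maximum suppression (NMS), which removes duplicate boxes predicted for the same traffic light or sign. A minimal NumPy sketch (the box format and threshold are illustrative, not taken from the paper):

```python
import numpy as np

def iou(box, boxes):
    """IoU of one [x1, y1, x2, y2] box against an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping rivals, repeat."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        order = order[1:][iou(boxes[i], boxes[order[1:]]) < iou_thresh]
    return keep
```

The reported detection counts per frame are what survive this filtering; lowering `iou_thresh` suppresses more aggressively.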

Paper 96: Smart X-Ray Geiger Data Logger: An Integrated System for Detection, Control, and Dose Evaluation

Abstract: X-ray dosimetry practices are guided by international standards and regulatory agencies to ensure the safety of patients, radiation workers, and the general public. This paper introduces the Smart X-ray Geiger Data Logger, a comprehensive system designed to enhance radiation safety through integrated detection, control, and dose evaluation. This study is based on the M4011 Geiger-Müller tube, exploiting ionization effects to measure radiation doses accurately. The system features an advanced algorithm for real-time exposure risk assessment, ensuring adherence to safety limits during medical procedures. Equipped with Wi-Fi connectivity, the device facilitates seamless data transmission and integration with centralized databases for comprehensive exposure monitoring and historical data analysis. The MQTT protocol is utilized for secure and efficient data transmission, ensuring the protection of sensitive information. A user-friendly interface provides instant feedback on radiation levels, cumulative doses, and procedural safety, supported by visual indicators and auditory alarms for immediate alerts. Experimental validation demonstrates the system's reliability in various settings, confirming its utility in optimizing radiation protection strategies and fostering safer environments in the healthcare field.

Author 1: Lhoucine Ben Youssef
Author 2: Abdelmajid Bybi
Author 3: Hilal Drissi
Author 4: El Ayachi Chater

Keywords: X-rays; radiation dose; radiation safety; exposure risk assessment; Geiger-Müller tube; medical imaging; real-time monitoring; smart devices

PDF

Paper 97: A Feature Map Adversarial Attack Against Vision Transformers

Abstract: Image classification is a domain where Deep Neural Networks (DNNs) have demonstrated remarkable achievements. Recently, Vision Transformers (ViTs) have shown potential in handling large-scale image classification challenges by efficiently scaling to higher resolutions and accommodating larger input sizes compared to traditional Convolutional Neural Networks (CNNs). However, in the context of adversarial attacks, ViTs are still considered vulnerable. Feature maps serve as the foundation for representing and extracting meaningful information from images. While CNNs excel at capturing local features and spatial relationships, ViTs are better at understanding global context and long-range dependencies. This paper proposes a feature map ViT-specific adversarial example attack called the Feature Map ViT-specific Attack (FMViTA). The objective of the investigation is to generate adversarial perturbations in the spatial and frequency domains of the image representation that allow a deeper distance measurement between perturbed and targeted images. The experiments focus on a pre-trained ViT model fine-tuned on the ImageNet dataset. The proposed attack demonstrates the vulnerability of ViTs to adversarial examples by showing that even a maximum perturbation magnitude of only 0.02 added to the input samples yields a 100% attack success rate.

Author 1: Majed Altoub
Author 2: Rashid Mehmood
Author 3: Fahad AlQurashi
Author 4: Saad Alqahtany
Author 5: Bassma Alsulami

Keywords: Vision transformers; adversarial attacks; DNNs; vulnerabilities; feature maps; perturbations; spatial domains; frequency domains

PDF

Paper 98: Backbone Feature Enhancement and Decoder Improvement in HRNet for Semantic Segmentation

Abstract: Addressing issues such as the tendency for small-scale objects to be lost, incomplete segmentation of large-scale objects, and overall low segmentation accuracy in existing semantic segmentation models, an improved HRNet network model is proposed. Firstly, by introducing multi-branch deep stripe convolutions, features of multi-scale objects are adaptively extracted using convolutional kernels of different sizes, which not only enhances the model’s ability to capture multi-scale objects but also strengthens its perception of the contextual environment. Secondly, to optimize the feature aggregation effect, the axial attention mechanism is adopted to aggregate image features along the x-axis and y-axis directions respectively, effectively capturing long-range dependencies within the global scope, and thus achieving precise positioning of objects of interest in the feature map. Finally, by implementing the progressive fusion-based upsampling strategy, it facilitates the complementary fusion of semantic information and detailed information between adjacent feature maps, thereby enhancing the model’s capability to restore fine-grained details in images. Experimental results demonstrate that on the PASCAL VOC2012+SBD dataset, the mean Intersection over Union (mIoU) of the improved HRNet S model in segmenting lower-resolution images is increased by 1.54% compared to the baseline method. Meanwhile, the improved HRNet L model achieved a 3.05% increase in mIoU compared to the original model when handling higher-resolution image segmentation tasks on the Cityscapes dataset, and attained the highest segmentation accuracy in 15 out of the 19 different scale classification categories on this dataset. These results indicate that the proposed method not only exhibits high segmentation accuracy but also possesses strong adaptability to multi-scale objects.

Author 1: HanLei Feng
Author 2: TieGang Zhong

Keywords: Semantic segmentation; HRNet; multi-branch deep strided convolution; axial attention mechanism; progressive fusion upsampling; multi-scale object adaptability

PDF

Paper 99: Machine Learning Approach to Identify Promising Mountain Hiking Destinations Using GIS and Remote Sensing

Abstract: The objective of this study is to address the complex task of identifying optimal locations for mountain hiking sites in the Eastern High Atlas region of Morocco, considering topographical factors. The study assesses the effectiveness of a commonly used machine learning classifier (MLC) in mapping potential mountain hiking areas, which is crucial for promoting and enhancing tourism in the area. To begin with, an extensive inventory of 120 mountain hiking sites was conducted, and precise measurements of three topographical parameters were collected at each site. Subsequently, a machine learning algorithm called Bagging was employed to develop a predictive model. The model achieved a high performance, with an area under the curve (AUC) value of 0.93. The model effectively identified favorable areas, encompassing around 24% of the study region, which were predominantly located in the western part. These areas were characterized by mountainous terrain, shorter slopes, and higher altitudes. The research findings provide valuable guidance to decision-makers, offering a roadmap to enhance the discovery of mountain hiking sites in the region.

Author 1: Lahbib Naimi
Author 2: Charaf Ouaddi
Author 3: Lamya Benaddi
Author 4: El Mahi Bouziane
Author 5: Abdeslam Jakimi
Author 6: Mohamed Manaouch

Keywords: Machine learning; mountain hiking; AI-based tourism; GIS; remote sensing; tourism; bagging algorithm; decision-making

PDF

Paper 100: Pioneering Granularity: Advancing Native Language Identification in Ultra-Short EAP Texts

Abstract: This study addresses the challenge of Native Language Identification (NLI) in ultra-short English for Academic Purposes (EAP) texts by proposing an innovative two-stage recognition method. Conventional views suggest that ultra-short texts lack sufficient linguistic features for effective NLI. However, we have found that even in such brief texts, subtle linguistic cues—such as syntactic structures, lexical choices, and grammatical errors—can still reveal the author’s native language background. Our approach involves fine-tuning the granularity of first language (L1) labels and refining deep learning models to more accurately capture the subtle differences in second language (L2) English texts written by individuals from similar cultural backgrounds. To validate the effectiveness of this method, we designed and conducted a series of scientific experiments using advanced Natural Language Processing (NLP) techniques. The results demonstrate that models adjusted for granular L1 distinctions exhibit greater sensitivity and accuracy in identifying language variations caused by nuanced cultural differences. Furthermore, this method is not only applicable to ultra-short texts but can also be extended to texts of varying lengths, offering new perspectives and tools for handling diverse language inputs. By integrating in-depth linguistic analysis with advanced computational techniques, our research opens up new possibilities for enhancing the performance and adaptability of NLI models in complex linguistic environments. It also provides fresh insights for future efforts aimed at optimizing the capture of linguistic features.

Author 1: Zhendong Du
Author 2: Kenji Hashimoto

Keywords: Native language identification; English for academic purposes; natural language processing

PDF

Paper 101: Feature Creation to Enhance Explainability and Predictability of ML Models Using XAI

Abstract: Bringing more transparency to decision-making processes that deploy ML tools is important in many fields. ML tools need to be designed so that they are more understandable and explainable to end users while building trust. The field of XAI, although a mature area of research, is increasingly being seen as a solution to these missing aspects of ML systems. In this paper, we focus on transparency issues when using ML tools in decision making in general, and specifically when recruiting candidates for high-profile positions. In the field of software development, it is important to correctly identify and differentiate highly skilled developers from developers who are adept only at performing regular and mundane programming jobs. If AI is used in the decision process, HR recruiting agents need to justify to their managers why certain candidates were selected and why some were rejected. Online Judges (OJs) are increasingly being used for developer recruitment across various levels, attracting thousands of candidates. Automating this decision-making process using ML tools can bring speed while mitigating bias in the selection process. However, the raw and huge datasets available on OJs need to be well curated and enhanced to make the decision process accurate and explainable. To address this, we built and subsequently enhanced an ML regressor model and the underlying dataset using XAI tools. We evaluated the model to show how XAI can be actively and iteratively used during the pre-deployment stage to improve the quality of the dataset and the prediction accuracy of the regression model. We show how these iterative changes improved the r2-score of the GradientRegressor model used in our experiments from 0.3507 to 0.9834 (an absolute improvement of 0.6327). We also show how the explainability provided by the LIME and SHAP tools increased with these steps. A unique contribution of this work is the application of XAI to a very niche area in recruitment, namely the evaluation of users' performance on OJs in software developer recruitment.

Author 1: Waseem Ahmed

Keywords: XAI; ML; AI; recruitment

PDF
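The abstract's iterative loop of using XAI to audit features and improve the model relies on feature-attribution scores; the paper uses LIME and SHAP. As a library-free stand-in, permutation importance answers the same question ("how much does this feature drive predictions?") and can be sketched as follows (function names and the scoring convention are ours):

```python
import numpy as np

def permutation_importance(model_predict, score, X, y, n_repeats=5, seed=0):
    """Drop in score when each column of X is shuffled: larger drop = more important.
    `score(y, predictions)` must be higher-is-better."""
    rng = np.random.default_rng(seed)
    base = score(y, model_predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])               # destroy only feature j
            drops.append(base - score(y, model_predict(Xp)))
        importances[j] = np.mean(drops)
    return importances
```

Features whose shuffling barely moves the score are candidates for removal or re-engineering, which is the kind of dataset curation the paper performs with its XAI tooling.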

Paper 102: Secret Sharing as a Defense Mechanism for Ransomware in Cloud Storage Systems

Abstract: Ransomware is a prevalent and highly destructive type of malware that has increasingly targeted cloud storage systems, leading to significant data loss and financial damage. Conventional security mechanisms, such as firewalls, antivirus software, and backups, have proven inadequate in preventing ransomware attacks, highlighting the need for more robust solutions. This paper proposes the use of Secret Sharing Schemes (SSS) as a defense mechanism to safeguard cloud storage systems from ransomware threats. Secret sharing works by splitting data into several encrypted shares, which are stored across different locations. This ensures that even if some shares are compromised, the original data remains recoverable, providing both security and redundancy. We conducted a comprehensive review of existing secret sharing schemes and evaluated their suitability for cloud storage protection. Building on this analysis, we proposed a novel framework that integrates secret sharing with cloud storage systems to enhance their resilience against ransomware attacks. The framework was tested through simulations and theoretical evaluations, which demonstrated its effectiveness in preventing data loss, even in the event of partial compromise. Our findings show that secret sharing can significantly improve the reliability and security of cloud storage systems, minimizing the impact of ransomware by allowing data to be reconstructed without paying a ransom. The proposed solution also offers scalability and flexibility, making it adaptable to different cloud storage environments. This research provides a valuable contribution to the field of cloud security, offering a new layer of protection against the growing threat of ransomware.

Author 1: Shuaib A Wadho
Author 2: Sijjad Ali
Author 3: Asma Ahmed A. Mohammed
Author 4: Aun Yichiet
Author 5: Ming Lee Gan
Author 6: Chen Kang Lee

Keywords: Ransomware; secret sharing; cloud storage; data leakage; reliability

PDF
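The splitting-and-reconstruction property the abstract relies on is exactly what Shamir's (k, n) secret sharing provides: any k of n shares recover the data, while fewer reveal nothing. A compact sketch over a prime field (the choice of prime and the treatment of the secret as a single field element are simplifications; real cloud storage would share a symmetric key or chunked data):

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; all arithmetic is in GF(PRIME)

def split_secret(secret, n, k):
    """Encode `secret` as the constant term of a random degree-(k-1) polynomial
    and hand out its value at x = 1..n as shares."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):          # Horner evaluation mod PRIME
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x = 0 from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Storing the n shares with different cloud providers gives the redundancy the paper describes: a ransomware compromise of fewer than k locations neither exposes nor destroys the data.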

Paper 103: Core Scheduler Task Duplication for Multicore Multiprocessor System

Abstract: Scheduling tasks across multiple cores remains a significant challenge due to its NP-complete nature, especially with the increasing complexity of multi-core, multi-processor architectures. This paper focuses on Multi-Core Oriented (MCO) scheduling algorithms, which specifically target multi-core multi-processor systems, and proposes a novel scheduling algorithm, Core Scheduler Task Duplication (CSD), designed for multi-core multi-processor environments. The CSD algorithm combines static and dynamic task prioritization to enhance processor utilization and performance. It clusters related tasks onto the same cores to improve efficiency and reduce execution time, and by leveraging task duplication it improves processor utilization and reduces task waiting times. To evaluate the CSD algorithm's performance, it was implemented and compared against the Modified Critical Path (MCP) scheduling algorithm. A series of experimental tests were conducted on diverse task sets, varying in size and complexity. Simulation results demonstrate that CSD outperforms the compared approaches in task scheduling and processor utilization, making it a promising solution for multi-core systems.

Author 1: Aya A. Eladgham
Author 2: Nesreen I. Ziedan
Author 3: Ibrahim Ziedan

Keywords: MultiCore; multiprocessor; DAG scheduling; dynamic priority; task duplication; clustering; MCP

PDF
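The paper's CSD algorithm itself is not reproduced here, but its setting (priority-driven DAG scheduling onto cores) can be illustrated with a plain greedy list scheduler that omits CSD's task duplication and clustering; the data structures and the longest-task-first priority are our own simplification:

```python
def list_schedule(tasks, deps, cost, n_cores):
    """tasks: iterable of ids; deps: {task: [predecessors]}; cost: {task: time}.
    Greedy list scheduling: each ready task goes to the core that lets it
    finish earliest. Returns (makespan, {task: core})."""
    finish = {}
    core_free = [0.0] * n_cores
    assign = {}
    remaining = set(tasks)
    while remaining:
        ready = [t for t in remaining if all(p in finish for p in deps.get(t, []))]
        ready.sort(key=lambda t: -cost[t])  # simple static priority: longest first
        for t in ready:
            est = max([finish[p] for p in deps.get(t, [])], default=0.0)
            c = min(range(n_cores), key=lambda i: max(core_free[i], est))
            start = max(core_free[c], est)
            finish[t] = start + cost[t]
            core_free[c] = finish[t]
            assign[t] = c
            remaining.remove(t)
    return max(finish.values()), assign
```

Duplication-based schedulers such as CSD improve on this baseline by re-executing a predecessor on the same core as its successor when that is cheaper than waiting for a cross-core result.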

Paper 104: Enhancing Skin Cancer Detection with Transfer Learning and Vision Transformers

Abstract: Early and accurate detection of skin cancer is critical for effective treatment. This research aims to enhance skin cancer multi-class classification using transfer learning and Vision Transformers (ViTs), addressing the challenges of imbalanced medical imaging data. We introduced data augmentation techniques to the HAM10000 dataset to enhance the diversity of the training and implemented 13 pre-trained transfer learning models. These included DenseNet (121, 169, and 201), ResNet (50V2, 101V2, and 152V2), VGG (16 and 19), NasNet (mobile and large), InceptionV3, MobileNetV2, and InceptionResNetV2, as well as two Vision Transformer architectures (ViT and deepViT). After fine-tuning these models, DenseNet121 achieved the highest accuracy of 94%, while deepViT reached 92%, highlighting the effectiveness of these approaches in skin cancer detection. Future work will focus on refining these models, exploring hybrid approaches that combine convolutional neural networks and transformers, and expanding the framework to other cancer types to advance automated diagnostic tools in dermatology.

Author 1: Istiak Ahmad
Author 2: Bassma Saleh Alsulami
Author 3: Fahad Alqurashi

Keywords: Medical imaging; skin cancer; multi-class classification; detection; deep learning; transfer learning; vision transformer

PDF

Paper 105: Accurate Head Pose Estimation-Based SO(3) and Orientation Tokens for Driver Distraction Detection

Abstract: Driver distraction is a major cause of traffic accidents. By identifying and analyzing the driver's head posture in monitoring images, the driver's mental state can be effectively judged, and early warnings or reminders can be given to reduce traffic accidents. We propose a novel dual-branch network named TokenFOE that combines Convolutional Neural Networks (CNNs) and a Transformer. The CNN branch uses a Multilayer Perceptron (MLP) to process the image features from the backbone and then generates a rotation matrix based on SO(3) to represent head posture. The Dimension Adaptive Transformer branch uses learnable tokens to represent nine categories of head orientation. The losses of both branches are combined for training, ultimately yielding accurate head pose estimation results. The training dataset is 300W-LP, and the quantitative test datasets are AFLW-2000 and BIWI. The experimental results show that the Mean Absolute Error is improved by 21.2% and 9.4% compared to the previous SOTA model on the two datasets, and the Mean Absolute Error of Vectors is improved by 19.2% and 10.2%, respectively. Based on the model output, calibrated through the camera adapter module, we present qualitative results on the largest driver distraction detection dataset currently available, the 100-driver dataset; robust and accurate detection results were achieved for four different camera perspectives in two modalities, RGB and near-infrared. Additionally, an ablation study shows that the model's inference speed (21 to 75 fps) supports real-time detection.

Author 1: Xiong Zhao
Author 2: Sarina Sulaiman
Author 3: Wong Yee Leng

Keywords: Head pose; driver distraction detection; rotation matrix; token; transformer

PDF

Paper 106: Predicting the Most Suitable Delivery Method for Pregnant Women by Using the KGC Ensemble Algorithm in Machine Learning

Abstract: Maternal and neonatal mortality rates pose a significant challenge to healthcare systems worldwide. Predicting the childbirth approach is essential for safeguarding the well-being of mother and child; currently, it depends on the judgment of the attending obstetrician. However, selecting the incorrect delivery method can cause serious short- and long-term health complications for both mother and child. This research harnesses the capability of machine learning algorithms to automate the delivery method prediction process. It studied two different stacking ensembles, leveraging a dataset of 6,157 electronic health records and a minimal feature set. Stack 1 consisted of k-nearest neighbors, decision trees, random forest, and support vector machine methods, yielding an F1-score of 95.67%. Stack 2 consisted of Gradient Boosting, k-nearest neighbors, and CatBoost methods, yielding 98.84%, which highlights the superior effectiveness of its integrated methodologies. This research enables obstetricians to ascertain the delivery method promptly and initiate essential measures to ensure the safety and well-being of mother and baby.

Author 1: Pusarla Sindhu
Author 2: Parasana Sankara Rao

Keywords: Delivery method; stacking; neonatal mortality; KGC ensemble algorithm

PDF
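Stacking, the technique behind both ensembles in this abstract, trains base learners, collects their out-of-fold predictions as features, and fits a meta-learner on top. A self-contained NumPy sketch, with a toy nearest-centroid learner standing in for the paper's KNN/tree/boosting components (all names here are ours):

```python
import numpy as np

class Centroid:
    """Minimal nearest-centroid classifier used as a stand-in base learner."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(-1)
        return self.classes_[d.argmin(axis=1)]

def stacking_fit_predict(base_factories, meta_factory, X, y, X_test, n_folds=3):
    """Build out-of-fold base predictions as meta-features, then train a
    meta-learner on them and predict on X_test."""
    idx = np.arange(len(X))
    folds = np.array_split(idx, n_folds)
    meta_train = np.zeros((len(X), len(base_factories)))
    for j, make in enumerate(base_factories):
        for fold in folds:
            train = np.setdiff1d(idx, fold)
            meta_train[fold, j] = make().fit(X[train], y[train]).predict(X[fold])
    # refit each base learner on all data to produce test-time meta-features
    meta_test = np.column_stack(
        [make().fit(X, y).predict(X_test) for make in base_factories])
    meta = meta_factory().fit(meta_train, y)
    return meta.predict(meta_test)
```

The out-of-fold step is what keeps the meta-learner from simply memorizing base-learner overfitting, which is why stacked ensembles such as Stack 2 can beat their strongest member.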

Paper 107: MH-LViT: Multi-path Hybrid Lightweight ViT Models with Enhancement Training

Abstract: Vision Transformers (ViTs) have become increasingly popular in various vision tasks. However, it remains challenging to adapt them to applications where computation resources are very limited. To this end, we propose a novel multi-path hybrid architecture and develop a series of lightweight ViT (MH-LViT) models to balance performance and complexity. Specifically, a triple-path architecture is exploited to facilitate feature representation learning, dividing and shuffling image features across channels following a feature-scale balancing strategy. In the first path, ViTs are utilized to extract global features, while in the second path, CNNs are introduced to focus more on local feature extraction. The third path completes the representation learning with a residual connection. Based on the developed lightweight models, a novel knowledge distillation framework, IntPNKD (Normalized Knowledge Distillation with Intermediate Layer Prediction Alignment), is proposed to enhance their representation ability; meanwhile, an additional Mixup regularization term is introduced to further improve their generalization ability. Experimental results on benchmark datasets show that, with the multi-path architecture, the developed lightweight models perform well by utilizing existing CNN and ViT components, and with the proposed enhancement training methods, the resulting models notably outperform their competitors. For example, on the miniImageNet dataset, our MH-LViT M3 improves top-1 accuracy by 4.43% and runs 4x faster on GPU compared with EdgeViT-S; on the CIFAR-10 dataset, our MH-LViT M1 improves top-1 accuracy by 1.24% and the enhanced version MH-LViT M1* by 2.28%, compared to the recent model EfficientViT M1.

Author 1: Yating Li
Author 2: Wenwu He
Author 3: Shuli Xing
Author 4: Hengliang Zhu

Keywords: Multi-path hybrid; lightweight ViT; normalized knowledge distillation; Mixup regularization

PDF
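The knowledge-distillation component of the enhancement training can be illustrated by the standard temperature-softened KL loss between teacher and student logits; the paper's IntPNKD adds normalization and intermediate-layer prediction alignment, which this minimal sketch omits:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)
```

Raising the temperature exposes the teacher's "dark knowledge" about relative class similarities, which is what lets a lightweight MH-LViT student learn more than the hard labels alone convey.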

Paper 108: Enhanced Fish Species Detection and Classification Using a Novel Deep Learning Approach

Abstract: This study presents an innovative deep learning approach for accurate fish species detection and classification in underwater environments. We introduce FishNet, a novel convolutional neural network architecture that combines attention mechanisms, transfer learning, and data augmentation techniques to improve fish recognition in challenging aquatic conditions. Our method was evaluated on the Fish4Knowledge dataset, achieving a mean average precision (mAP) of 92.3% for detection and 89.7% accuracy for species classification, outperforming existing state-of-the-art models. The proposed approach demonstrates robust performance across various underwater conditions, including different lighting, turbidity, and occlusion scenarios, making it suitable for real-world applications in marine biology, fisheries management, and ecological monitoring.

Author 1: Musab Iqtait
Author 2: Marwan Harb Alqaryouti
Author 3: Ala Eddin Sadeq
Author 4: Ahmad Aburomman
Author 5: Mahmoud Baniata
Author 6: Zaid Mustafa
Author 7: Huah Yong Chan

Keywords: Deep learning; Fish4Knowledge; classification

PDF

Paper 109: Reducing Traffic Congestion Using Real-Time Traffic Monitoring with YOLOv8

Abstract: The large volume of vehicles on principal roads, together with ongoing road expansion projects, is causing serious congestion during peak hours in many places in Mauritius. Consequently, an innovative solution has been proposed that uses deep learning neural networks and cutting-edge computer vision methodologies to help reduce this problem. The idea is to create a reliable system that can measure traffic density and traffic flow on important roads of Mauritius in real time. A dataset of 2,800 frames was collected and used to train and test the YOLO models. A setup was designed for detecting, tracking, and counting vehicles such as buses, cars, motorbikes, trucks, and vans. Relevant traffic information can also be retrieved from videos to generate statistics on traffic density. Moreover, the system can estimate the speed of individual vehicles and determine traffic flow on bidirectional roads. The overall mean counting accuracy was 96.1% and the overall mean classification accuracy was 94.4%. For traffic flow, the overall mean accuracy was 93.9%, while traffic density was estimated with an overall mean accuracy of 95.3%. Compared with the manual approaches used in Mauritius to assess the state of traffic, the proposed system is a modern, low-cost, and effective solution that can be adopted to help reduce traffic congestion and traffic accidents.

Author 1: Sameerchand Pudaruth
Author 2: Irfaan Mohammad Boodhun
Author 3: Choo Wou Onn

Keywords: Computer vision; deep learning; vehicle detection and tracking; traffic accidents; traffic congestion

PDF

Paper 110: Enhancing Credit Card Fraud Detection Using a Stacking Model Approach and Hyperparameter Optimization

Abstract: Credit card fraud detection has emerged as a crucial area of study, especially with the rise in online transactions coupled with increased financial losses from fraudulent activities. In this regard, a refined framework for identifying credit card fraud is introduced, utilizing a stacking ensemble model along with hyperparameter optimization. This paper integrates three highly effective algorithms—XGBoost, CatBoost, and LightGBM—into a single strategy to improve predictive performance and address the issue of unbalanced datasets. To enable a more efficient search and adjustment of model parameters, Bayesian Optimization is employed for hyperparameter tuning. The proposed approach has been tested on a publicly accessible dataset. Results indicate notable enhancements over established baseline models in essential performance metrics, including ROC-AUC, precision, and recall. This method, while effective in fraud detection, holds significant promise for other fields focused on identifying rare occurrences.

Author 1: El Bazi Abdelghafour
Author 2: Chrayah Mohamed
Author 3: Aknin Noura
Author 4: Bouzidi Abdelhamid

Keywords: Credit card fraud detection; stacking models; hyperparameter tuning; logistic regression; ensemble learning

PDF

Paper 111: Towards Interpretable Diabetic Retinopathy Detection: Combining Multi-CNN Models with Grad-CAM

Abstract: Diabetic retinopathy (DR) is a leading cause of vision impairment and blindness, necessitating accurate and early detection to prevent severe outcomes. This paper discusses the utility of ensemble learning methodologies in enhancing the prediction accuracy of diabetic retinopathy detection from retinal images and the prospective utilization of Gradient-weighted Class Activation Mapping (Grad-CAM) to maximize model interpretability. Using a dataset of 1,437 color fundus images, we explored the potential of different pre-trained convolutional neural networks (CNNs), including Xception, VGG16, InceptionV3, and DenseNet121. Their respective accuracies on the test set were 89.27%, 91.44%, 89.06%, and 93.35%. Our objective was to improve the accuracy of diabetic retinopathy detection. We explored methods to combine predictions from these four models: we began with weighted voting, which achieved an accuracy of 93.95%, and subsequently employed meta-learners, achieving an improved accuracy of 94.63%. These approaches surpassed individual models in distinguishing between non-proliferative and proliferative phases of DR. These findings underscore the potential of these approaches in developing robust diagnostic tools for diabetic retinopathy. Furthermore, techniques like Grad-CAM enhance interpretability, opening the door for further advances in early-stage detection and clinical integration while maximising accuracy and interpretability.
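The first ensembling step described above, accuracy-weighted soft voting, can be sketched in a few lines: each model's class-probability vector is averaged with a weight proportional to its validation accuracy. The probability vectors below are made-up stand-ins for the four CNNs; only the weighting scheme reflects the abstract.

```python
def weighted_vote(prob_vectors, weights):
    """Average class-probability vectors, weighting each model's vote."""
    total = sum(weights)
    n_classes = len(prob_vectors[0])
    fused = [
        sum(w * p[c] for w, p in zip(weights, prob_vectors)) / total
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=fused.__getitem__), fused

# Two classes: 0 = non-proliferative, 1 = proliferative (illustrative probabilities).
probs = [[0.6, 0.4], [0.3, 0.7], [0.55, 0.45], [0.2, 0.8]]
accs = [0.8927, 0.9144, 0.8906, 0.9335]   # per-model validation accuracies as weights
label, fused = weighted_vote(probs, accs)
print(label)   # the two higher-accuracy models favor class 1
```

The meta-learner variant in the paper would replace the fixed weighted average with a trained model over the base predictions.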

Author 1: Zakaria Said
Author 2: Fatima-Ezzahraa Ben-Bouazza
Author 3: Mounir Mekkour

Keywords: Diabetic retinopathy; retinal images; Grad-CAM; weighted voting; meta-learners

PDF

Paper 112: Breast Tumor Classification Using Dynamic Ultrasound Sequence Pooling and Deep Transformer Features

Abstract: Breast ultrasound (BUS) imaging is widely utilized for detecting breast cancer, one of the most life-threatening cancers affecting women. Computer-aided diagnosis (CAD) systems can assist radiologists in diagnosing breast cancer; however, the performance of these systems can be degraded by speckle noise, artifacts, and low contrast in BUS images. In this paper, we propose a novel method for breast tumor classification based on the dynamic pooling of BUS sequences. Specifically, we introduce a weighted dynamic pooling approach that models the temporal evolution of breast tissues in BUS sequences, thereby reducing the impact of noise and artifacts. The dynamic pooling weights are determined using image quality metrics such as blurriness and brightness. The pooled BUS sequence is then input into an efficient hybrid vision transformer-CNN network, which is trained to classify breast tumors as benign or malignant. Extensive experiments and comparisons on BUS sequences demonstrate the effectiveness of the proposed method, achieving an accuracy of 93.78%, and outperforming existing methods. The proposed method has the potential to enhance breast cancer diagnosis and contribute to lowering the mortality rate.
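The core pooling idea can be sketched as a quality-weighted average over the frames of a sequence, so that low-quality frames contribute less to the fused image. The tiny 2x2 "frames" and the brightness-based quality heuristic below are illustrative assumptions standing in for the paper's blurriness/brightness metrics.

```python
def quality_score(frame):
    # Illustrative heuristic only: brighter frames get higher weight.
    flat = [px for row in frame for px in row]
    return sum(flat) / len(flat) / 255.0

def weighted_pool(frames):
    """Fuse a list of equally sized 2-D frames into one frame."""
    weights = [quality_score(f) for f in frames]
    total = sum(weights)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [
        [sum(w * f[r][c] for w, f in zip(weights, frames)) / total
         for c in range(cols)]
        for r in range(rows)
    ]

frames = [[[100, 100], [100, 100]],
          [[200, 200], [200, 200]]]
pooled = weighted_pool(frames)
print(pooled[0][0])   # closer to 200 than a plain mean, since frame 2 weighs more
```

The pooled frame would then be passed to the hybrid transformer-CNN classifier described in the abstract.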

Author 1: Mohamed A Hassanien
Author 2: Vivek Kumar Singh
Author 3: Mohamed Abdel-Nasser
Author 4: Domenec Puig

Keywords: Breast ultrasound; breast cancer; CAD systems; deep learning; vision transformer

PDF

Paper 113: Classification of Moroccan Legal and Legislative Texts Using Machine Learning Models

Abstract: Artificial intelligence tools have revolutionized many fields, bringing significant progress in automating tasks and solving complex problems. In this article, we focus on the legal domain, where the data to be processed are specific and in large quantities. Our study carries out automatic classification of Moroccan legal and legislative texts in Arabic. In addition, we conduct a series of experiments to evaluate the impact of stemming, class imbalance and data quantity on the performance of the models used. Given the specificity of the Arabic language, we used Natural Language Processing (NLP) tools adapted to this language. For classification, we worked with the following models: Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbors (KNN) and Naive Bayes (NB). The results obtained are very promising, and the comparison of model outputs enriches the debate on the specificities of each model.

Author 1: Amina BOUHOUCHE
Author 2: Mustapha ESGHIR
Author 3: Mohammed ERRACHID

Keywords: Classification Arabic text; natural language processing; legal data; machine learning

PDF

Paper 114: ERCO-Net: Enhancing Image Dehazing for Optimized Detail Retention

Abstract: Image dehazing is a crucial preprocessing step in computer vision for enhancing image quality and enabling many downstream applications. However, existing methods often do not accurately restore hazy images while maintaining computational efficiency. To overcome this challenge, we propose ERCO-Net, a new fusion framework that combines edge restriction and contextual optimization methods. By using boundary constraints, ERCO-Net extends the boundaries to help protect the edges and structures of an image. Contextual optimization impacts the final quality of the dehazed image by enhancing smoothness and coherence. We compare ERCO-Net with conventional approaches such as dark channel prior (DCP), All-in-One dehazing network (AoD), and Feature Fusion Attention network (FFA-Net). The comparative evaluation highlights the effectiveness of the proposed fusion method, providing significant improvement in image clarity, contrast, and colors. The combination of edge restriction and contextual optimization not only enhances the quality of dehazing but also decreases computational complexity, presenting a promising avenue for advancing image restoration techniques. The source code is available at https://github.com/FatimaAyub12/Image-Dehazing-.

Author 1: Muhammad Ayub Sabir
Author 2: Fatima Ashraf
Author 3: Ahthasham Sajid
Author 4: Nisreen Innab
Author 5: Reem Alrowili
Author 6: Yazeed Yasin

Keywords: Image dehazing; edge restriction; contextual optimization; transmission map estimation; haze removal

PDF

Paper 115: Optimizing Text Summarization with Sentence Clustering and Natural Language Processing

Abstract: Text summarization is an important task in natural language processing (NLP), with significant implications for information retrieval and content management. Traditional summarization methods often struggle with issues like redundancy, loss of key information, and inability to capture the underlying semantic structure of the text. This paper addresses these challenges by presenting an advanced approach to extractive summarization, which integrates clustering-based sentence selection with the BART model. The proposed method tackles the problem of redundancy by using Term Frequency-Inverse Document Frequency (TF-IDF) for feature extraction, followed by K-means clustering to group similar sentences. This clustering step is designed to reduce redundancy by ensuring that each cluster represents a distinct theme or topic. Representative sentences are then selected from these clusters based on their cosine similarity to a user query, which helps in retaining the most relevant information. These selected sentences are then fed into the BART model to generate the final abstractive summary. This combination of extractive and abstractive techniques addresses the common problem of information loss, ensuring that the summary is both comprehensive and coherent. The approach is evaluated using the CNN/DailyMail and XSum datasets, which are widely recognized benchmarks in the summarization domain. Results assessed through ROUGE metrics demonstrate that the proposed model substantially improves summarization quality compared to existing benchmarks.
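The extractive stage described above can be illustrated with a minimal stdlib-only sketch: build TF-IDF vectors for the sentences, then select the sentence most cosine-similar to a user query. The clustering step is omitted for brevity, and the tokenizer, tiny corpus, and query are illustrative assumptions, not the paper's pipeline.

```python
import math

def tfidf_vectors(sentences):
    """Return sparse TF-IDF vectors (dicts) and the IDF table."""
    docs = [s.lower().split() for s in sentences]
    vocab = sorted({w for d in docs for w in d})
    n = len(docs)
    idf = {w: math.log(n / sum(w in d for d in docs)) + 1 for w in vocab}
    vecs = [{w: d.count(w) / len(d) * idf[w] for w in d} for d in docs]
    return vecs, idf

def cosine(a, b):
    dot = sum(a.get(w, 0) * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

sentences = [
    "the cat sat on the mat",
    "stock markets fell sharply today",
    "the dog chased the cat",
]
vecs, idf = tfidf_vectors(sentences)
query = {w: idf.get(w, 0) for w in "cat dog".split()}
best = max(range(len(sentences)), key=lambda i: cosine(vecs[i], query))
print(sentences[best])   # the sentence mentioning both query terms wins
```

In the full method, selected sentences would then be passed to BART to generate the final abstractive summary.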

Author 1: Zahir Edress
Author 2: Yasin Ortakci

Keywords: Abstractive summarization; extractive summarization; sentence clustering; language understanding; information retrieval

PDF

Paper 116: Hiding Encrypted Images in Audios Based on Cellular Automatas and Discrete Fourier Transform

Abstract: With the increasing need for secure long-distance communication, protecting sensitive information such as images during transmission remains a significant challenge. This paper proposes a new method for hiding encrypted images inside audio files by integrating Cellular Automata (CA) and the Discrete Fourier Transform (DFT). The primary aim is to enable secure transmission of large encrypted images without altering the audio’s perceptual quality. The scheme leverages the cryptographic properties of CA to generate encrypted images, which are then embedded into inaudible frequencies of the audio using the DFT. Results show that this method successfully hides and recovers images of considerable size, maintaining bit-level integrity of the original images while preserving audio quality. However, the scheme lacks resilience to signal processing attacks, such as compression or filtering, and the resulting audio file is larger than the original. Despite these limitations, the method provides a competitive advantage in payload capacity and efficiency, making it suitable for applications where the transmission of large, sensitive data is necessary but not subject to aggressive signal attacks.
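A minimal sketch of the frequency-domain embedding idea: hide one bit in a chosen DFT bin of a short audio frame by forcing that bin's magnitude high or low, then recover it by thresholding. The frame, bin index, and threshold are illustrative assumptions; a real system works on long signals, many bins, and frequencies chosen to be inaudible.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def embed_bit(frame, bit, k=6, mag=40.0):
    X = dft(frame)
    phase = cmath.phase(X[k]) if abs(X[k]) else 0.0
    X[k] = cmath.rect(mag if bit else 0.0, phase)   # force bin magnitude high/low
    X[len(X) - k] = X[k].conjugate()                # mirror bin keeps the signal real
    return idft(X)

def extract_bit(frame, k=6, mag=40.0):
    return 1 if abs(dft(frame)[k]) > mag / 2 else 0

frame = [float(i % 5) for i in range(16)]           # stand-in audio samples
stego = embed_bit(frame, 1)
print(extract_bit(stego))   # recovers the embedded bit
```

Note this naive scheme shares the limitation stated in the abstract: lossy compression or filtering of the stego audio would destroy the embedded magnitudes.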

Author 1: Jose Alva Cornejo
Author 2: Esdras D. Vasquez
Author 3: Jose Calizaya Quispe
Author 4: Roxana Flores-Quispe
Author 5: Yuber Velazco-Paredes

Keywords: Cellular automaton; Fourier Transform; cryptography; synchronization; steganography; embedding

PDF

Paper 117: DBPF: An Efficient Dynamic Block Propagation Framework for Blockchain Networks

Abstract: Scalability poses a significant challenge in blockchain networks, particularly in optimizing the propagation time of new blocks. This paper introduces an approach, termed “DBPF” (Dynamic Block Propagation Framework for Blockchain Networks), aimed at addressing this challenge. The approach focuses on optimizing neighbor selection during block propagation to mitigate redundancy and enhance network efficiency. By employing informed neighbor selection and leveraging the Brotli lossless compression algorithm to reduce block size, the objective is to optimize network bandwidth and minimize transmission time. The DBPF framework calculates a Minimum Spanning Tree (MST) to ensure efficient communication paths between nodes, while Brotli compression reduces the block size to optimize network bandwidth. The core objective of DBPF is to streamline the propagation process by selecting optimal neighbors and eliminating unnecessary data redundancy. Through experimentation and simulation of the block propagation process using DBPF, we demonstrate a significant reduction in the propagation time of new blocks compared to traditional methods. Comparisons against approaches such as selecting neighbors with the least Round-Trip Time (RTT), random neighbor selection, and the DONS approach reveal a decrease in propagation time of more than 45%, depending on network type and number of nodes. The effectiveness of DBPF in boosting blockchain network efficiency and decreasing propagation time is emphasized by the experimental findings. Additionally, various compression algorithms such as Zstandard and zlib were tested during the research; nevertheless, the results suggest that Brotli produced the best outcomes. Through the integration of optimized neighbor selection and effective data compression, DBPF presents a promising solution to the scalability issues confronting blockchain networks. These results showcase the capability of DBPF to notably enhance network performance, paving the way toward smoother and more efficient blockchain operations.
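The MST step can be sketched with Kruskal's algorithm: given pairwise link delays between peers, it yields a redundancy-free broadcast tree of the kind DBPF builds its neighbor selection on. The node names and delays below are made up for illustration.

```python
def kruskal_mst(nodes, edges):
    """edges: [(delay, u, v)]; returns the MST as a list of edges."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]    # path halving
            n = parent[n]
        return n

    mst = []
    for w, u, v in sorted(edges):            # cheapest links first
        ru, rv = find(u), find(v)
        if ru != rv:                         # edge joins two components: keep it
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

nodes = ["A", "B", "C", "D"]
edges = [(10, "A", "B"), (4, "A", "C"), (7, "B", "C"),
         (9, "B", "D"), (2, "C", "D")]
mst = kruskal_mst(nodes, edges)
print(mst)   # 3 edges spanning 4 nodes, total delay 13
```

Propagating a block along such a tree reaches every node exactly once, eliminating the duplicate transmissions of naive flooding.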

Author 1: Osama Farouk
Author 2: Mahmoud Bakrey
Author 3: Mohamed Abdallah

Keywords: Blockchain; scalability; minimum spanning tree; compression; broadcasting; optimized neighbor selection; network bandwidth; transmission time optimization

PDF

Paper 118: Skin Diseases Classification with Machine Learning and Deep Learning Techniques: A Systematic Review

Abstract: Skin cancer is one of the most prevalent types of cancer worldwide, and its early detection is crucial for improving patient outcomes. Artificial Intelligence (AI) has shown significant promise in assisting dermatologists with accurate and efficient diagnosis through automated skin disease classification. This systematic review aims to provide a comprehensive overview of the various AI techniques employed for skin disease classification, focusing on their effectiveness across different datasets and methodologies. A total of 220 articles were initially identified from databases such as Scopus and IEEE Xplore. After removing duplicates and conducting a title and abstract screening, 213 studies were assessed for eligibility based on predefined criteria such as study relevance, clarity of results, and innovative AI approaches. Following full-text review, 56 studies were included in the final analysis. These studies were categorized based on the AI techniques used, including Convolutional Neural Networks (CNNs), Transformer-based models, hybrid models combining CNNs with other techniques, Generative Adversarial Networks (GANs), and ensemble learning approaches. The review highlights that the ISIC dataset and its variations are the most commonly used data sources, owing to their extensive and diverse collection of dermoscopic images. The results indicate that CNN-based models remain the most widely adopted and effective approach for skin disease classification, with several hybrid and Transformer-based models also demonstrating high accuracy and specificity. Despite the advancements, challenges such as dataset variability, the need for more diverse training data, and the lack of interpretability in AI models persist. This review provides insights into current trends and identifies future directions for research, emphasizing the importance of integrating AI into clinical practice for improved skin disease management.

Author 1: Amina Aboulmira
Author 2: Hamid Hrimech
Author 3: Mohamed Lachgar

Keywords: Skin disease classification; Artificial Intelligence (AI); Convolutional Neural Networks (CNNs); Transformer-based models; Generative Adversarial Networks (GANs); ensemble learning; hybrid models; ISIC dataset; dermatology; machine learning; deep learning; skin cancer detection; dermoscopic images; medical imaging; systematic review

PDF

Paper 119: An Investigation into the Risk Factors of Forest Fires and the Efficacy of Machine Learning Techniques for Early Detection

Abstract: Forest fires are a major environmental hazard that can have significant impacts on human lives. Early detection and swift action are crucial for controlling such situations and minimizing damage. However, the automatic tools based on local sensors in meteorological stations are often insufficient for detecting fires immediately. Machine learning offers a promising solution to forecast forest fires and reduce their rapid spread. In recent state-of-the-art solutions, only one or two techniques have been utilized for prediction. In this research, we investigate several methods for forest fire area prediction, including Long Short Term Memory (LSTM), Auto Regressive Integrated Moving Average (ARIMA), and Support Vector Regression (SVR). Our aim is to identify the most effective and optimal method for predicting forest fires. After comparing our results with other artificial intelligence and machine learning techniques applied to the same dataset, we found that the LSTM approach outperforms the ARIMA and SVR predictors by more than 92%. Our findings also indicate that the LSTM algorithm has a lower estimation error when compared to other predictors, thus providing more accurate forecasts.

Author 1: Asma Cherif
Author 2: Sara Chaudhry
Author 3: Sabina Akhtar

Keywords: Machine learning; forest fire; LSTM; ARIMA; SVR

PDF

Paper 120: Revolutionizing Historical Document Digitization: LSTM-Enhanced OCR for Arabic Handwritten Manuscripts

Abstract: Optical Character Recognition (OCR) holds immense practical value in the realm of handwritten document analysis, given its widespread use in various human transactions. This scientific process enables the conversion of diverse documents or images into analyzable, editable, and searchable data. In this paper, we present a novel approach that combines transfer learning and Arabic OCR technology to digitize ancient handwritten scripts. Our method aims to preserve and enhance accessibility to extensive collections of historically significant materials, including fragile manuscripts and rare books. Through a comprehensive examination of the challenges encountered in digitizing Arabic handwritten texts, we propose a transfer learning-based framework that leverages pre-trained models to overcome the scarcity of labeled data for training OCR systems. The experimental results demonstrate a remarkable improvement in the recognition accuracy of Arabic handwritten texts, thereby offering a highly promising solution for the digitization of historical documents. Our work enables the digitization of large collections of ancient historical materials, including manuscripts and rare books characterized by delicate physical conditions. The proposed approach signifies a significant step towards preserving our cultural heritage and facilitating advanced research in historical document analysis.

Author 1: Safiullah Faizullah
Author 2: Muhammad Sohaib Ayub
Author 3: Turki Alghamdi
Author 4: Toqeer Syed Ali
Author 5: Muhammad Asad Khan
Author 6: Emad Nabil

Keywords: Optical character recognition; transfer learning; Arabic OCR; image processing; classification; convolutional neural network

PDF

Paper 121: A Secure Scheme to Counter the Man in the Middle Attacks in SDN Networks-Based Domain Name System

Abstract: The Internet and computer networks are vulnerable to cyber-attacks that compromise the services they provide to facilitate the management of data and users. The Domain Name System (DNS) is the Internet service that translates domain names to IP addresses and vice versa. DNS is sometimes the victim of attacks that are difficult to detect and prevent because they are not only very stealthy but also conceal themselves behind its proper functioning. Among the attacks to which DNS is subject are man-in-the-middle (MITM) attacks. Traditional networks, which centralize all network functions in a single device, make the detection of, and protection against, these attacks challenging. Software-defined networking (SDN) is a technology that is widely used to address many traditional network problems such as security and network architecture. Therefore, in this paper, we propose a scheme designed to detect and block man-in-the-middle attacks based on a newly defined architecture. The effectiveness of our secured solution is evaluated in an SDN architecture where an Address Resolution Protocol (ARP) spoofing MITM attack is generated for evaluation purposes. The results of our simulations show that we can effectively detect the attack, and the performance evaluation of our approach shows that the proposed solution is effective in terms of security, implementation cost and resource consumption. We therefore recommend the use of our proposed solution to address MITM attacks in an SDN-based Domain Name System.
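One ingredient of ARP-spoofing detection can be sketched as follows: keep a trusted IP-to-MAC binding table and flag any ARP reply whose MAC contradicts the recorded binding, a classic symptom of a MITM attempt. The class and its logic are an illustrative toy, not the paper's actual SDN controller scheme.

```python
class ArpGuard:
    def __init__(self):
        self.bindings = {}                   # ip -> first-seen MAC address

    def observe_reply(self, ip, mac):
        """Return True if the reply conflicts with the known binding."""
        known = self.bindings.setdefault(ip, mac)
        return known != mac

guard = ArpGuard()
print(guard.observe_reply("10.0.0.1", "aa:bb:cc:00:00:01"))  # False: first sighting
print(guard.observe_reply("10.0.0.1", "aa:bb:cc:00:00:01"))  # False: consistent
print(guard.observe_reply("10.0.0.1", "de:ad:be:ef:00:02"))  # True: binding conflict
```

In an SDN setting, a controller observing all ARP traffic could run such a check centrally and install flow rules to drop traffic from the offending host.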

Author 1: Frank Manuel Vuide Pangop
Author 2: Miguel Landry Foko Sindjoung
Author 3: Mthulisi Velempini

Keywords: Cyber security; domain name system; man in the middle attack; software defined networking

PDF

Paper 122: Efficient Load-Balancing and Container Deployment for Enhancing Latency in an Edge Computing-Based IoT Network Using Kubernetes for Orchestration

Abstract: Edge Computing (EC) provides computational and storage resources close to data-generating devices, and reduces end-to-end latency for communications between end-devices and remote servers. In smart cities (SC), for example, thousands of applications run on edge servers, and it becomes crucial to manage resource allocation and load balancing to improve data transmission throughput and reduce latency. Kubernetes (k8s) is a widely used container orchestration platform that is commonly employed for the efficient management of containerized applications in SC. However, it does not integrate well with certain EC requirements such as network-related metrics and the heterogeneity of EC clusters. Furthermore, requests are equally distributed across all replicas of an application, which may increase processing time, since in the EC environment nodes are geographically dispersed. Several existing studies have investigated this problem; unfortunately, the proposed solutions consume a lot of node resources in the cluster. To the best of our knowledge, none of these studies considered cluster heterogeneity when deploying applications that have different resource requirements. To address this issue, this paper proposes a new technique to deploy applications on edge servers by extending the Kubernetes scheduler, and an approach to manage requests among the different nodes. The simulation results show that our solution generates better results than some of the state-of-the-art works in terms of latency.
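A latency-aware placement policy of the general kind described above can be sketched as follows: score each edge node by a weighted sum of network latency and current load, and deploy on the best-scoring node with enough free resources. The node data, field names, and scoring weights are assumptions for illustration, not the paper's exact scheduler extension.

```python
def pick_node(nodes, cpu_needed, latency_weight=0.7):
    """nodes: [{'name', 'latency_ms', 'load', 'free_cpu'}]; lower score wins."""
    eligible = [n for n in nodes if n["free_cpu"] >= cpu_needed]
    if not eligible:
        return None                          # no node can host the container

    def score(n):
        # Blend network distance and current utilization into one cost.
        return latency_weight * n["latency_ms"] + (1 - latency_weight) * n["load"]

    return min(eligible, key=score)["name"]

nodes = [
    {"name": "edge-1", "latency_ms": 5,  "load": 80, "free_cpu": 2},
    {"name": "edge-2", "latency_ms": 20, "load": 10, "free_cpu": 4},
    {"name": "edge-3", "latency_ms": 8,  "load": 30, "free_cpu": 1},
]
print(pick_node(nodes, cpu_needed=2))   # edge-3 filtered out; edge-2 scores best
```

In Kubernetes terms, the filter step corresponds to a scheduler predicate and the scoring step to a priority/score plugin.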

Author 1: Garrik Brel Jagho Mdemaya
Author 2: Milliam Maxime Zekeng Ndadji
Author 3: Miguel Landry Foko Sindjoung
Author 4: Mthulisi Velempini

Keywords: Latency; Kubernetes; edge computing; Internet of Things; load-balancing

PDF

Paper 123: Advanced Techniques for Optimizing Demand-Side Management in Microgrids Through Load-Based Strategies

Abstract: Microgrids are crucial for ensuring reliable electricity in remote areas, but integrating renewable sources like photovoltaic (PV) systems presents challenges due to supply intermittency and demand fluctuations. Demand-side management (DSM) addresses these issues by adjusting consumption patterns. This article explores a DSM strategy combining load shifting (shifting demand to periods of high PV generation), peak clipping (limiting maximum load), and valley filling (redistributing load during low-demand periods). Implemented in MATLAB and tested on a PV-battery microgrid, the strategy significantly reduces peak demand, improves the peak-to-average demand ratio (PAR), and enhances system stability and flexibility, particularly with the inclusion of deferrable loads.
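Two of the DSM operations described above can be sketched numerically: peak clipping caps the hourly load at a limit, and valley filling redistributes the clipped energy into the lowest-demand hours so total energy is preserved. The 6-hour profile and the cap are illustrative numbers, not the paper's MATLAB test case.

```python
def clip_and_fill(profile, cap):
    """Cap hourly load at `cap`, then refill the removed energy into valleys."""
    clipped = [min(p, cap) for p in profile]
    surplus = sum(profile) - sum(clipped)          # energy removed at the peaks
    filled = clipped[:]
    while surplus > 1e-9:
        i = min(range(len(filled)), key=filled.__getitem__)   # deepest valley
        add = min(surplus, cap - filled[i])        # respect the cap while filling
        if add <= 0:
            break                                  # no headroom left anywhere
        filled[i] += add
        surplus -= add
    return filled

profile = [2.0, 3.0, 9.0, 8.0, 4.0, 1.0]           # kW demand per hour
result = clip_and_fill(profile, cap=6.0)
print(result, sum(result))                         # peak reduced, energy conserved
```

A flatter profile like this directly improves the peak-to-average ratio (PAR) the abstract reports on.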

Author 1: Ramia Ouederni
Author 2: Bechir Bouaziz
Author 3: Faouzi Bacha

Keywords: Demand side management; microgrid; load shifting; peak clipping; valley filling

PDF

Paper 124: Ensemble of Weighted Code Mixed Feature Engineering and Machine Learning-Based Multiclass Classification for Enhanced Opinion Mining on Unstructured Data

Abstract: There is an exponential growth of opinions on online platforms, and the rapid rise in communication technologies generates a significant need to analyze opinions in online social networks (OSN). However, these opinions are unstructured, rendering knowledge extraction from them complex and challenging to implement. Although existing opinion mining systems are applied in several applications, limited research is available to handle code-mixed opinions of an unstructured nature, where lexicons switch between languages within a single opinion structure. The challenge lies in interpreting complex opinions in multimedia networks owing to their unstructured nature, volume, and lexical structure. This paper presents a novel ensemble approach using machine learning and natural language processing to interpret code-mixed opinions efficiently. Firstly, the opinions are extracted from the input corpus and preprocessed using the proposed Extended Feature Vectors (EFV). Subsequently, the opinion mining system is implemented using a novel weighted code-mixed opinion mining framework (WCM-OMF) for multiclass classification. The proposed WCM-OMF model achieves accuracies of 79.11% and 72% on the benchmark datasets, a significant improvement over existing Hierarchical LSTM, Random Forest, and SVM models and state-of-the-art methods. The proposed solution can be applied to opinion detection in other business sectors, helping to obtain actionable insights for efficient decision-making in enterprises and Business Intelligence (BI).

Author 1: Ruchi Sharma
Author 2: Pravin Shrinath

Keywords: Opinion mining; Machine learning; weighted ensemble; code mixed; Natural Language Processing; Business Intelligence; Online Social Networks

PDF

The Science and Information (SAI) Organization
BACK TO TOP

Computer Science Journal

  • About the Journal
  • Call for Papers
  • Submit Paper
  • Indexing

Our Conferences

  • Computing Conference
  • Intelligent Systems Conference
  • Future Technologies Conference
  • Communication Conference

Help & Support

  • Contact Us
  • About Us
  • Terms and Conditions
  • Privacy Policy

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org