IJACSA Volume 15 Issue 7

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: Integrating Observability with DevOps Practices in Financial Services Technologies: A Study on Enhancing Software Development and Operational Resilience

Abstract: The financial market depends heavily on reliable, high-quality software solutions to perform crucial transactions, process important information, and deliver customer services. As these systems grow more complicated, their reliability and performance become critical. This paper focuses on the implementation of the observability concept within the DevOps approach in financial services technologies, discussing its strengths, weaknesses, opportunities, and threats with regard to the future. Observability is intertwined with DevOps: with its help, it is possible to gain deep insight into a system's inner state, enhance status monitoring, detect problems faster, and optimize performance continuously. When organized and analyzed properly, observability data can therefore play a critical role in increasing software quality in financial institutions, aligning with regulatory standards, and breaking down silos between development and operations teams. However, implementing observability with DevOps best practices in the financial services industry faces several challenges, including data security, data overload, and the difficult task of encouraging an organizational culture of continuous and consistent observability. The article presents a guide for incorporating observability into DevOps: the step-by-step process of defining observability needs, choosing the most suitable tools, integrating them with existing DevOps frameworks, configuring alerts, and improving continuously. Furthermore, it considers examples of how some financial organizations have applied observability to reduce risks, improve efficacy, and enrich customer interactions. The article also discusses future perspectives of observability: artificial intelligence and machine learning are quickly emerging as means of automating observability tasks, and security concerns around implementing observability in the financial services industry are growing. By adopting observability and aligning it with DevOps, financial institutions can develop and sustain sound, reliable, high-quality infrastructure and maintain industry leadership.
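To make the observability integration concrete, here is a minimal Python sketch (not from the paper) using the prometheus_client library to expose a latency histogram and an error counter for a hypothetical payment service; the service logic, metric names, and failure rate are invented for illustration.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Two core observability signals for a hypothetical payment service.
REQUEST_LATENCY = Histogram("payment_request_seconds",
                            "Latency of payment requests")
REQUEST_ERRORS = Counter("payment_request_errors_total",
                         "Number of failed payment requests")

@REQUEST_LATENCY.time()          # record the duration of every call
def process_payment():
    time.sleep(random.uniform(0.01, 0.1))   # simulated work
    if random.random() < 0.05:              # simulated 5% failure rate
        REQUEST_ERRORS.inc()
        raise RuntimeError("payment failed")

if __name__ == "__main__":
    start_http_server(8000)      # metrics served at :8000/metrics for scraping
    while True:
        try:
            process_payment()
        except RuntimeError:
            pass                 # failures are already counted above
```

A DevOps pipeline would then alert on these series (for example, on error rate or latency percentiles) rather than on raw logs.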

Author 1: Ankur Mahida

Keywords: Observability; monitoring; integrated analysis; DevOps; integration; operational resilience

PDF

Paper 2: Enhancing Administrative Source Registers for the Development of a Robust Large Language Model: A Novel Methodological Approach

Abstract: Accurate statistical information is critical for understanding, describing, and managing socio-economic systems. While data availability has increased, the data often does not meet the quality requirements for effective governance. Administrative registers are crucial for statistical information production, but their potential is hampered by quality issues stemming from administrative inconsistencies. This paper explores the integration of semantic technologies, including ontologies and knowledge graphs, with administrative databases to improve data quality. We discuss the development of large language models (LLMs) that enable a robust, queryable framework, facilitating the integration of disparate data sources. This approach ensures high-quality administrative data, essential for statistical reuse and for the development of comprehensive, dynamic knowledge graphs and LLMs tailored to administrative applications.
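As a minimal illustration of the semantic technologies named above, the following Python sketch builds a tiny knowledge graph over invented administrative records with rdflib and queries it with SPARQL; the namespace, entities, and predicates are assumptions for the example, not the authors' schema.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/admin#")
g = Graph()

# One person's record, merged from a civil register and a tax register.
g.add((EX.person42, RDF.type, EX.Citizen))
g.add((EX.person42, EX.residesIn, Literal("Florence")))
g.add((EX.person42, EX.declaredIncome, Literal(34000)))

# A SPARQL query over the merged graph: citizens above an income threshold.
q = """
    PREFIX ex: <http://example.org/admin#>
    SELECT ?p ?city WHERE {
        ?p a ex:Citizen ;
           ex:residesIn ?city ;
           ex:declaredIncome ?inc .
        FILTER(?inc > 30000)
    }
"""
for row in g.query(q):
    print(row.p, row.city)
```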

Author 1: Adham Kahlawi
Author 2: Cristina Martelli

Keywords: Statistical information systems; administrative data reuse; ontology; database; semantic web; knowledge graph; LLM

PDF

Paper 3: Enhancing Healthcare: Machine Learning for Diabetes Prediction and Retinopathy Risk Evaluation

Abstract: Diabetes mellitus stands as a major public health issue that affects millions globally. Among the various complications associated with diabetes, diabetic retinopathy presents a significant concern, affecting approximately one-third of diabetic patients. Early detection of diabetic retinopathy is paramount, as timely treatment can significantly reduce the risk of severe visual impairment. The study employs advanced machine learning techniques to predict diabetes and assess risk levels for retinopathy, aiming to enhance predictive accuracy and risk stratification in clinical settings. This approach contributes to better management and treatment outcomes. A diverse array of machine learning models, including Logistic Regression, Random Forest, XGBoost, and voting classifiers, was used. These models were applied to a meticulously selected dataset, specifically designed to include comprehensive diabetic indicators along with retinopathy outcomes, enabling a detailed comparative analysis. Among the evaluated models, XGBoost demonstrated superior performance in terms of accuracy, sensitivity, and computational efficiency. This model excelled in identifying risk levels among diabetic patients, providing a reliable tool for early detection of potential retinopathy. The findings suggest that the integration of machine learning models, particularly XGBoost, into the healthcare system could significantly enhance early screening and personalized treatment plans for diabetic retinopathy. This advancement holds the potential to improve patient outcomes through timely and accurate risk assessment, paving the way for targeted interventions.
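A hedged sketch of this style of model comparison with scikit-learn and xgboost; the data here is synthetic (the paper's diabetes dataset is not reproduced) and the hyperparameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for diabetic indicators and retinopathy outcomes.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=200, random_state=0),
    "xgb": XGBClassifier(eval_metric="logloss", random_state=0),
}
# Soft voting averages the base models' predicted probabilities.
models["voting"] = VotingClassifier(list(models.items()), voting="soft")

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, accuracy_score(y_te, pred), recall_score(y_te, pred))
```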

Author 1: Ghinwa Barakat
Author 2: Samer El Hajj Hassan
Author 3: Nghia Duong-Trung
Author 4: Wiam Ramadan

Keywords: Machine learning; diabetes prediction; artificial intelligence in healthcare; XGBoost; Random Forest

PDF

Paper 4: Enhancing Audio Classification Through MFCC Feature Extraction and Data Augmentation with CNN and RNN Models

Abstract: Sound classification is a multifaceted task that necessitates gathering and processing vast quantities of data, as well as constructing machine learning models that can accurately distinguish between various sounds. In our project, we implemented a novel methodology for classifying both musical instruments and environmental sounds, utilizing convolutional and recurrent neural networks. We used the Mel Frequency Cepstral Coefficient (MFCC) method to extract features from audio, which emulates the human auditory system and produces highly distinct features. Knowing how important data processing is, we implemented distinctive approaches, including a range of data augmentation and cleaning techniques, to achieve an optimized solution. The outcomes were noteworthy, as both the convolutional and recurrent neural network models achieved a commendable level of accuracy. As machine learning and deep learning continue to revolutionize image classification, it is time to explore the development of adaptable models for audio classification. Despite the challenges associated with a small dataset, we successfully crafted our models using convolutional and recurrent neural networks. Overall, our strategy for sound classification bears significant implications for diverse domains, encompassing speech recognition, music production, and healthcare. We believe that, with further research and progress, our work can pave the way for breakthroughs in audio data classification and analysis.
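For readers unfamiliar with the MFCC step described above, here is a minimal librosa sketch (the file name, noise level, and coefficient count are illustrative) that loads a clip, applies a simple noise-injection augmentation, and extracts MFCC features.

```python
import librosa
import numpy as np

y, sr = librosa.load("sample.wav", sr=22050)   # placeholder audio file
y_aug = y + 0.005 * np.random.randn(len(y))    # additive-noise augmentation

# 13 MFCCs per frame; the mel scale approximates human auditory perception.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
mfcc_aug = librosa.feature.mfcc(y=y_aug, sr=sr, n_mfcc=13)

# Average over time for a fixed-length clip-level feature vector, which a
# CNN or RNN classifier could consume (or use the full frame sequence).
features = mfcc.mean(axis=1)
print(mfcc.shape, features.shape)              # (13, n_frames) and (13,)
```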

Author 1: Karim Mohammed Rezaul
Author 2: Md. Jewel
Author 3: Md Shabiul Islam
Author 4: Kazy Noor e Alam Siddiquee
Author 5: Nick Barua
Author 6: Muhammad Azizur Rahman
Author 7: Mohammad Shan-A-Khuda
Author 8: Rejwan Bin Sulaiman
Author 9: Md Sadeque Imam Shaikh
Author 10: Md Abrar Hamim
Author 11: F.M Tanmoy
Author 12: Afraz Ul Haque
Author 13: Musarrat Saberin Nipun
Author 14: Navid Dorudian
Author 15: Amer Kareem
Author 16: Ahmmed Khondokar Farid
Author 17: Asma Mubarak
Author 18: Tajnuva Jannat
Author 19: Umme Fatema Tuj Asha

Keywords: Deep learning (artificial intelligence); data augmentation; audio segmentation; signal processing; frame blocking; fast fourier transform; discrete cosine transform; feature extraction; MFCC; CNN; RNN

PDF

Paper 5: Autonomous Robots for Transport Applications

Abstract: Even though automation of travel systems is already under way, it is important to understand how the introduction of self-driving cars might change people's transportation habits, because changes in these choices could affect health as well as the long-term viability and efficiency of transportation systems. This study fills that information gap in the Australian context. Respondents provided information about their backgrounds, their current modes of travel, the importance they attached to particular aspects of transportation, and their attitudes toward self-driving cars. They then read a scenario, shaped by expert opinion, describing a future in which cars drive themselves, and selected the types of transportation they would most likely use in that scenario. Descriptive analyses were used to examine how transport choices changed, and regression models were used to identify the factors predicting future transport options. Many respondents said they wanted to use active, shared, and public travel more in the future than they do now, while intentions to use private transport were roughly halved. In general, better public transportation, a workable system for active transportation, and reasonably cheap shared driverless cars were seen as positive influences on how people planned to travel in the imagined situation. If policymakers act to achieve these outcomes, the automation of transportation is likely to bring positive changes to society.

Author 1: Yang Lu

Keywords: Autonomous vehicles; transport choices; sustainability; health; physical activity; active transport; shared autonomous vehicles; private autonomous vehicles; public transport

PDF

Paper 6: A Memory-Based Neural Network Model for English to Telugu Language Translation on Different Types of Sentences

Abstract: In India, regional languages play an important role in government-to-public communication, citizen rights, weather forecasting, and farming, and the language changes from state to state. In remote areas, understanding becomes difficult because nearly everything today is presented in English, and manual translation into regional languages takes too long to deliver services to the common people. Machine translation automatically converts a sentence from one language into another while preserving the meaning of the input sentence in the output language. In this work, we propose a Memory Based Neural Network for Translation (MBNNT) model for English to Telugu translation of simple, compound, and complex sentences. We used the BLEU and WER metrics to measure translation quality. Applied across the different sentence types, the LSTM-based model showed promising results over Statistical Machine Translation and Recurrent Neural Networks in terms of quality and performance.
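A small sketch of the two translation-quality metrics named above: BLEU via NLTK and WER computed directly as word-level edit distance; the sentence pair is invented.

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

ref = "the weather will be sunny tomorrow"
hyp = "the weather is sunny tomorrow"
bleu = sentence_bleu([ref.split()], hyp.split(),
                     smoothing_function=SmoothingFunction().method1)
print(f"BLEU={bleu:.3f}  WER={wer(ref, hyp):.3f}")
```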

Author 1: Bilal Bataineh
Author 2: Bandi Vamsi
Author 3: Ali Al Bataineh
Author 4: Bhanu Prakash Doppala

Keywords: Machine translation; English-Telugu translation; RNN; LSTM

PDF

Paper 7: Hybrid Security Systems: Human and Automated Surveillance Approaches

Abstract: The study investigates the performance of hybrid security systems under different personnel training and artificial intelligence (AI) assistance conditions. The aim is to understand the system's impact in different scenarios involving human operators and AI, and to develop a predictive model for optimizing system performance. A human security information model was built to predict the performance of hybrid security systems. The system's performance metrics (response time, hits, misses, mistakes), cognitive load, visual discrimination, trust, and confidence were measured under different training and assistance conditions. Participants were divided into trained and non-trained groups, and each group performed surveillance tasks with and without AI assistance. Predictive modeling was performed using Linear Regression. Training significantly improved performance by reducing misses and mistakes and increasing hits, both with and without AI assistance. In the non-trained group, AI assistance boosted speed and hit accuracy but led to more mistakes. For the trained group, AI assistance reduced response time and misses while increasing hits without affecting the mistake rate. Trust and confidence were higher with AI in the non-trained group, while AI reduced cognitive load in the trained group. The findings highlight the interactions between human operators, AI assistance, and training in hybrid surveillance systems. The predictive model can guide the design and implementation of these systems to optimize performance. Future studies should focus on strategies to enhance operator trust and confidence in AI-assisted systems, further optimizing the collaborative potential of hybrid surveillance frameworks.
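An illustrative sketch of the predictive-modeling step: a linear regression over coded condition variables (training, AI assistance, and their interaction) predicting a performance metric. The data is synthetic; the paper's measurements are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
trained = rng.integers(0, 2, 200)       # 0 = untrained, 1 = trained
ai = rng.integers(0, 2, 200)            # 0 = unassisted, 1 = AI-assisted
hits = 10 + 4 * trained + 2 * ai + rng.normal(0, 1, 200)  # invented effects

# Include an interaction term so the model can capture training-by-AI effects.
X = np.column_stack([trained, ai, trained * ai])
model = LinearRegression().fit(X, hits)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```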

Author 1: Mohammed Ameen
Author 2: Richard Stone
Author 3: Ulrike Genschel
Author 4: Fatima Mgaedeh

Keywords: Hybrid surveillance systems; human-AI interaction; operator training; predictive modeling; linear regression

PDF

Paper 8: Towards Secure Internet of Things-Enabled Intelligent Transportation Systems: A Comprehensive Review

Abstract: The Internet of Things (IoT) constitutes a technological evolution capable of influencing the establishment of smart cities in a wide range of fields, including transportation. Intelligent Transportation Systems (ITS) represent a prominent IoT-enabled solution designed to enhance the efficiency, safety, and sustainability of transport networks. However, integrating IoT with ITS introduces significant security challenges that need to be addressed to ensure the reliability of these systems. This research aims to critically analyze the current state of IoT-integrated ITS, identify security threats and vulnerabilities, and evaluate existing security measures to propose robust solutions. Utilizing a comprehensive review methodology that includes literature analysis and expert interviews, we identify key achievements and pinpoint critical security gaps. Our findings indicate that while substantial progress has been made in securing ITS, significant challenges remain, particularly regarding scalability, interoperability, and real-time data processing. The study proposes enhanced security protocols and methods to mitigate these risks, contributing to the development of more secure and resilient IoT-enabled ITS.

Author 1: Changxia Lu
Author 2: Fengyun Wang

Keywords: Internet of Things; intelligent transportation; security; logistics

PDF

Paper 9: Hybrid Machine Learning Models Based on CATBoost Classifier for Assessing Students' Academic Performance

Abstract: This study addresses the imperative task of predicting and evaluating students' academic performance by amalgamating qualitative and quantitative factors, crucial in light of the persisting challenges undergraduates encounter in completing their degrees. Educational institutions wield significant influence in prognosticating student outcomes, necessitating the application of data mining (DM) techniques such as classification, clustering, and regression to discern and forecast student study behaviors. This research demonstrates the potential of deriving valuable insights from educational data, empowering educational stakeholders with enhanced decision-making capabilities and facilitating improved student outcomes. Employing a hybrid approach, models were developed within the realm of educational DM, leveraging the CATBoost Classifier (CATC) in conjunction with two cutting-edge optimization algorithms: Victoria Amazonica Optimization (VAO) and Artificial Rabbits Optimization (ARO). Initially, the data is partitioned into training and testing sets for performance evaluation using statistical metrics. After classifying 649 students according to their final scores, VAO outperformed ARO in maximizing CATC's classification ability, resulting in an approximate 6% enhancement in accuracy and precision. Moreover, the VAO model accurately categorizes 606 out of 649 students. This research furnishes invaluable predictive models for educators, researchers, and policymakers endeavoring to enrich students' educational journeys and foster academic success.
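A hedged sketch of the CATC stage on synthetic grade data; the VAO/ARO metaheuristic tuning is not reproduced here and is represented only by fixed hyperparameters.

```python
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split

# 649 synthetic student records standing in for the paper's dataset.
X, y = make_classification(n_samples=649, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

clf = CatBoostClassifier(iterations=300, depth=6, learning_rate=0.1,
                         verbose=False, random_seed=1)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(accuracy_score(y_te, pred), precision_score(y_te, pred))
```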

Author 1: Ding Hao
Author 2: Yang Xiaoqi
Author 3: Qi Taoyu

Keywords: Academic performance; hybridization; CATBoost classifier; meta-heuristic algorithms; educational institutions

PDF

Paper 10: Exploring the Impact of Time Management Skills on Academic Achievement with an XGBC Model and Metaheuristic Algorithm

Abstract: Estimating a student's academic performance is a crucial aspect of learning preparation. To predict student academic performance, this study uses several Machine Learning (ML) models together with time management skills data from the Time Structure Questionnaire (TSQ). While a number of other useful characteristics have been used to forecast academic achievement, TSQ findings, which directly evaluate students' time management skills, have never been included. This oversight is surprising, as time management is a skill that likely plays a significant role in academic success. The purpose of this research is to examine the connection between college students' academic success and their ability to manage their time well. The Extreme Gradient Boosting Classification (XGBC) model has been utilized in this study to forecast student academic performance. To enhance the prediction accuracy of the XGBC model, this study employed three optimizers: Giant Trevally Optimizer (GTO), Bald Eagle Search Optimization (BESO), and Seagull Optimization Algorithm (SOA). Impartial performance evaluators were employed to assess the models' predictions, minimizing potential biases. The findings showcase the success of this approach in developing an accurate predictive model for student academic performance. Notably, the best optimized XGBC model surpassed the others, achieving impressive accuracy and precision values of 0.920 and 0.923 during the training phase.
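The GTO, BESO, and SOA optimizers are not standard library components, so the sketch below stands in a generic randomized hyperparameter search for the same role: tuning an XGBoost classifier, here on synthetic features in place of the TSQ data.

```python
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, n_features=15, random_state=2)

# Randomized search over the same kind of hyperparameters a metaheuristic
# optimizer would explore.
search = RandomizedSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_distributions={
        "n_estimators": randint(50, 400),
        "max_depth": randint(2, 8),
        "learning_rate": uniform(0.01, 0.3),
    },
    n_iter=25, scoring="accuracy", cv=5, random_state=2,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```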

Author 1: Songyang Li

Keywords: Student academic performance; time management; machine learning; extreme gradient boosting classification; metaheuristic algorithm

PDF

Paper 11: Deep Hybrid Learning Approaches for COVID-19 Virus Detection Using Chest X-ray Images

Abstract: This paper introduces a novel deep learning framework for highly accurate COVID-19 detection using chest X-ray images. The proposed model tackles the challenge by stacking Convolutional Neural Network models for superior feature extraction, which may also enhance interpretability. The proposed model achieved high accuracy in distinguishing COVID-19 from healthy cases. The study demonstrates the potential of deep hybrid learning for accurate COVID-19 detection, paving the way for its application in real-world settings. Future research could explore methods to further refine the model's capabilities. Overall, this work contributes significantly to the development of robust deep-learning methods for COVID-19 detection, with potential for broader use in medical image analysis.
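One common reading of a stacked-CNN design is sketched below in Keras: two small convolutional branches over the same input whose pooled features are concatenated before classification. Layer sizes are invented; the paper's exact architecture is not reproduced.

```python
from tensorflow.keras import Input, Model, layers

inp = Input(shape=(224, 224, 1))             # grayscale chest X-ray

def branch(x, filters):
    """A small convolutional feature extractor."""
    x = layers.Conv2D(filters, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(filters * 2, 3, activation="relu")(x)
    return layers.GlobalAveragePooling2D()(x)

# Stack two extractors and fuse their features.
merged = layers.concatenate([branch(inp, 16), branch(inp, 32)])
out = layers.Dense(1, activation="sigmoid")(merged)   # COVID-19 vs healthy

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```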

Author 1: Mansor Alohali

Keywords: COVID-19 detection; deep learning; deep hybrid learning; chest X-ray analysis; machine learning classifiers; medical image analysis; convolutional networks

PDF

Paper 12: Having Deep Investigation on Predicting Unconfined Compressive Strength by Decision Tree in Hybrid and Individual Approaches

Abstract: In the field of geotechnical engineering, rocks' unconfined compressive strength (UCS) is an important variable that plays a significant part in civil engineering projects such as foundation design, mining, and tunneling. The stability and safety of these projects depend on how accurately UCS can be predicted. In this study, machine learning (ML) techniques are applied to forecast UCS for soil-stabilizer combinations. This study aims to build complex and highly accurate predictive models using the robust Decision Tree (DT) as a primary ML tool. These models relate UCS to a variety of intrinsic soil properties, including dispersion, plasticity, linear particle size shrinkage, and the kind and number of stabilizing additives. Furthermore, this paper integrates two meta-heuristic algorithms, the Population-based Vortex Search algorithm (PVS) and the Arithmetic Optimizer Algorithm (AOA), to enhance model precision. These algorithms work in tandem to bolster the accuracy of the predictive models. The models were rigorously validated by analyzing UCS samples from different soil types, drawing from historical stabilization test results. This study unveils three noteworthy models: DTAO, DTPB, and an independent DT model. Each model provides invaluable insights that support the meticulous projection of UCS for soil-stabilizer blends. Notably, the DTAO model stands out with exceptional performance metrics: with an R2 value of 0.998 and an impressively low RMSE of 1.242, it showcases precision and reliability. These findings not only underscore the accuracy of the DTAO model but also emphasize its effectiveness in predicting soil stabilization outcomes.
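A hedged sketch of the baseline DT stage for UCS regression on synthetic soil-stabilizer features, reporting the same R2 and RMSE metrics; the PVS/AOA tuning step is omitted.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))    # stand-ins: plasticity, shrinkage, additives...
y = 50 + X @ np.array([5.0, -3.0, 2.0, 4.0, 1.0]) + rng.normal(0, 2, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=3)
tree = DecisionTreeRegressor(max_depth=6, random_state=3).fit(X_tr, y_tr)
pred = tree.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"R2={r2_score(y_te, pred):.3f}  RMSE={rmse:.3f}")
```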

Author 1: Qingqing Zhang
Author 2: Lei Wang
Author 3: Hongmei Gu

Keywords: Unconfined compressive strength; machine learning; decision tree; population-based vortex search algorithm; arithmetic optimizer algorithm

PDF

Paper 13: Temporal Fusion Transformers for Enhanced Multivariate Time Series Forecasting of Indonesian Stock Prices

Abstract: The stock market represents the financial pulse of economies and is an important part of the global financial system. It allows people to buy and sell shares in publicly held corporations, serving as a platform for investors to trade ownership in businesses and enabling companies to raise capital for expansion and operations. However, the stock market can be very risky for any investor because of fluctuating prices and market uncertainty. Integrating deep learning into stock market analysis enables researchers and practitioners to gain a deeper understanding of the trends and variations, improving investment decisions. Recent advancements in deep learning, specifically the invention of transformer-based models, have revolutionized research in stock market prediction. The Temporal Fusion Transformer (TFT) was introduced as a model that uses self-attention mechanisms to capture complex temporal dynamics across multiple time-series sequences. This study investigates feature engineering and technical data integrated into TFT models to improve short-term stock market prediction. The Variance Inflation Factor (VIF) was used to quantify the severity of multicollinearity in the dataset. Evaluation metrics were used to assess the TFT models' effectiveness in improving the accuracy of stock market forecasting compared to other transformer models and traditional statistical Naïve models used as baselines. The results show that TFT models excel in forecasting by effectively identifying multiple patterns, resulting in better predictive accuracy. Furthermore, considering the unique patterns of individual stocks, TFT obtained a remarkable SMAPE of 0.0022.
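Two of the quantities named above are easy to show directly: VIF for multicollinearity screening (via statsmodels) and SMAPE for forecast evaluation. The price series and indicators below are synthetic stand-ins for the paper's technical data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def smape(actual, forecast):
    """Symmetric mean absolute percentage error."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.mean(2 * np.abs(forecast - actual) /
                   (np.abs(actual) + np.abs(forecast)))

rng = np.random.default_rng(4)
df = pd.DataFrame({"close": rng.normal(100, 5, 200)})
df["ma5"] = df["close"].rolling(5).mean()    # moving-average indicators
df["ma10"] = df["close"].rolling(10).mean()
df = df.dropna()

# A high VIF flags a feature as nearly collinear with the others.
vifs = [variance_inflation_factor(df.values, i) for i in range(df.shape[1])]
print(dict(zip(df.columns, np.round(vifs, 1))))
print(smape([100, 102, 101], [99.8, 102.1, 101.2]))
```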

Author 1: Standy Hartanto
Author 2: Alexander Agung Santoso Gunawan

Keywords: Time series forecasting; stock price prediction; capital market; technical analysis; TFT

PDF

Paper 14: Method for Prediction of Motion Based on Recursive Least Squares Method with Time Warp Parameter and its Application to Physical Therapy

Abstract: We built an exercise therapy support system for children with disabilities that applies artificial intelligence technology. In this system, a 3DCG character demonstrates a model body-building exercise while providing feedback such as calling out to the trainee. To make the exercise therapy more effective, the system attempts to correct the trainee's movement by notifying the trainee by voice or other means before the movement deviates significantly from that of the 3DCG character. Since there is inevitably a delay between the movements of the 3DCG character playing the trainer's role and those of the trainee, this delay must be predicted using time series analysis. The Recursive Least Squares (RLS) estimation method was used for this prediction. In addition, the similarity of the movements of both parties was evaluated using the Dynamic Time Warping (DTW) method, and the time warp calculated in this process was used as input to the RLS method. The experimental results confirmed that predictions were made with sufficient accuracy and that, when similarity was low, the 3DCG character playing the trainer's role called out to the trainee, leading to improvements in the trainee's movements.
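A minimal recursive least squares sketch: at each step the filter predicts the next motion sample from recent ones and updates its weights from the prediction error. The DTW-derived time-warp input is simplified here to a plain lagged-sample vector, so this illustrates only the RLS core, not the paper's full method.

```python
import numpy as np

def rls_predict(signal, order=3, lam=0.98, delta=100.0):
    """One-step-ahead RLS prediction with forgetting factor lam."""
    w = np.zeros(order)                      # filter weights
    P = delta * np.eye(order)                # inverse correlation matrix
    preds = np.zeros_like(signal)
    for t in range(order, len(signal)):
        x = signal[t - order:t][::-1]        # most recent samples first
        preds[t] = w @ x                     # prediction
        e = signal[t] - preds[t]             # prediction error
        k = P @ x / (lam + x @ P @ x)        # gain vector
        w = w + k * e                        # weight update
        P = (P - np.outer(k, x) @ P) / lam   # covariance update
    return preds

t = np.linspace(0, 4 * np.pi, 200)
motion = np.sin(t)                            # simulated joint trajectory
print(np.abs(rls_predict(motion) - motion)[50:].mean())   # small residual
```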

Author 1: Kohei Arai
Author 2: Kosuke Eto
Author 3: Mariko Oda

Keywords: Exercise therapy; disabled person; body-building exercise; 3D character; Recursive Least-Squares estimation: RLS method; Dynamic Time Warping: DTW method

PDF

Paper 15: Research of the V2X Technology Organization Model for Self-Managed Technical Equipment

Abstract: The steady progression of information technology today is opening up opportunities for extensive automation across various sectors, including the automotive industry. The active development of IT systems has paved the way for V2X (Vehicle-to-Everything) technology, which enables communication such as "vehicle-to-vehicle" and "vehicle-to-road infrastructure". This article focuses on exploring the use of V2X technology to create "intelligent transportation". Currently, V2X technologies are not widely adopted due to the limited coverage of 5G networks. Although the existing 4G network is adequate for streaming HD content and playing online games, it cannot support the safer and smarter operation required for autonomous cars. Nevertheless, within the 4G network framework, it is possible to develop a comprehensive solution for automating car traffic. This would significantly reduce the number of road accidents and optimize traffic flow. This article explores the implementation of V2X technology in road traffic to achieve these goals.

Author 1: Amir Gubaidullin
Author 2: Olga Manankova

Keywords: V2X; V2V; autonomous vehicles; DSRC; scenarios; frequency spectrum

PDF

Paper 16: IoT-Opthom-CAD: IoT-Enabled Classification System of Multiclass Retinal Eye Diseases Using Dynamic Swin Transformers and Explainable Artificial Intelligence

Abstract: Integrating Internet of Things (IoT)-assisted eye-related recognition incorporates connected devices and sensors for primary analysis and monitoring of eye conditions. Recent advancements in IoT-based retinal fundus recognition utilizing deep learning (DL) have significantly enhanced early analysis and monitoring of eye-related diseases. Ophthalmologists use retinal images in the diagnosis of different eye diseases, and numerous computer-aided diagnosis (CAD) studies have applied IoT and DL technologies to the early diagnosis of such diseases. The retina is susceptible to microvascular alterations due to numerous retinal disorders. This study creates a new, non-invasive CAD system called IoT-Opthom-CAD. It uses Swin transformers and the gradient boosting (LightGBM) method to find different eye diseases in colored fundus images after applying data augmentation techniques. We introduce an efficient and powerful Swin transformer (dc-swin) that connects a dynamic cross-attention layer to extract local and global features. In practice, this dynamic attention layer provides a mechanism whereby the model focuses on different parts of the image at different times, learning to cross-reference or integrate information across these parts. Next, the LightGBM method is used to classify these features into multiple classes: normal (NML), diabetic retinopathy (DR), tessellation (TSN), age-related macular degeneration (ARMD), Optic Disc Edema (ODE), and hypertensive retinopathy (HR). To localize the causes of eye-related diseases, Grad-CAM is used as an explainable artificial intelligence (XAI) technique. To develop the Opthom-CAD system, preprocessing and data augmentation steps are integrated to strengthen the architecture. Three multi-label retinal disease datasets, MuReD, BRSET, and OIA-ODIR, are utilized to evaluate the system. After ten cross-validation runs, the proposed Opthom-CAD system shows excellent results: an AUC of 0.95, an accuracy of up to 96.5%, a precision of 95%, a recall of 94%, and an F1-score of 95.7. The results indicate that the performance of the Opthom-CAD system is much better than that of numerous baseline state-of-the-art models. As a result, the Opthom-CAD system can assist ophthalmologists in detecting eye-related diseases. The source code is public and accessible for anyone to view and modify on GitHub (https://github.com/Qaisar256/Opthom-CAD).
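A hedged sketch of the second stage described above: a LightGBM classifier over feature vectors (random stand-ins for the dc-swin embeddings) mapped to the six retinal classes.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

classes = ["NML", "DR", "TSN", "ARMD", "ODE", "HR"]
rng = np.random.default_rng(5)
X = rng.normal(size=(600, 256))     # stand-in transformer embeddings
y = rng.integers(0, len(classes), 600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=5)
clf = LGBMClassifier(n_estimators=200, random_state=5).fit(X_tr, y_tr)
print(f1_score(y_te, clf.predict(X_te), average="macro"))
```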

Author 1: Talal AlBalawi
Author 2: Mutlaq B. Aldajani
Author 3: Qaisar Abbas
Author 4: Yassine Daadaa

Keywords: Computer-aided diagnosis; ophthalmology; multiclass classification; tessellation; age-related macular degeneration; Optic Disc Edema (ODE); hypertensive retinopathy; data augmentation; transformers; Swin; explainable AI; Internet of Things

PDF

Paper 17: Method for Detecting the Appropriateness of Wearing a Helmet Chin Strap at Construction Sites

Abstract: A novel method for verifying the proper use of helmet chin straps during clothing inspections at construction sites is proposed, prioritizing safety in construction environments. Existing helmet-wearing state detection systems often rely on single-view approaches that may not be optimal; this research aims to address the limitations of single-view detection and proposes a multi-view deep learning approach for improved accuracy. The proposed method leverages transfer learning for object detection using well-known models such as YOLOv8 and Detectron2. The annotation process for detecting helmet chin straps was conducted in the COCO format with the assistance of Roboflow. Experimental analysis yielded the following finding: using images of the chin strap captured simultaneously from two different angles, Detectron2 demonstrated a remarkable ability to accurately determine the state of helmet usage, identifying conditions such as the chin strap being removed or loosely fastened with 100% accuracy.
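For orientation, a minimal transfer-learning sketch with the ultralytics YOLOv8 API; the dataset YAML, class names, and image file are placeholders, not the paper's data or settings.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # pretrained checkpoint as starting point

# chin_strap.yaml (hypothetical) would list train/val folders and classes
# such as strap_fastened / strap_loose / strap_removed in COCO-style labels.
model.train(data="chin_strap.yaml", epochs=50, imgsz=640)

results = model("worker.jpg")        # inference on a site photo
for box in results[0].boxes:
    print(int(box.cls), float(box.conf))   # predicted class id and confidence
```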

Author 1: Kohei Arai
Author 2: Kodai Beppu
Author 3: Yuya Ifuku
Author 4: Mariko Oda

Keywords: Detectron2; safety-first construction; helmet chin strap; annotation; roboflow; COCO annotator; YOLOv8

PDF

Paper 18: Augmented Reality Development for Garbage Sortation Education for Children

Abstract: One of the main contributing factors to the global climate-change crisis is the accumulation of waste, which grows higher every day. An effective way to reduce this accumulation is to sort and recycle waste. However, the waste sorting process in Indonesia is still largely ineffective: only 1.4% of waste can be processed and sorted, and one of the biggest causes is a lack of knowledge about the types of waste that exist. Based on these problems, the aim of this research is to create augmented reality-based waste sorting educational technology, which is expected to increase knowledge of waste types and encourage environmentally conscious behavior. The ADDIE development model is used as the research method. This research successfully built an augmented reality waste-sorting mobile application, which received a good rating on the SUS questionnaire and is considered acceptable with an average score of 84.5 out of 100.
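The SUS score reported above follows a standard formula: odd items contribute (response - 1), even items contribute (5 - response), and the sum is scaled by 2.5 to a 0-100 range. A small sketch with one invented respondent:

```python
def sus_score(responses):
    """Standard SUS scoring for ten answers on a 1-5 scale."""
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # index 0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5

answers = [5, 1, 5, 2, 4, 1, 5, 1, 5, 2]   # one illustrative respondent
print(sus_score(answers))                  # 0-100 scale; the study's mean was 84.5
```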

Author 1: Devi Afriyantari Puspa Putri
Author 2: Nisa Dwi Septiyanti
Author 3: Endah Sudarmilah
Author 4: Diah Priyawati

Keywords: Augmented reality; waste sorting; Unity3D; Vuforia

PDF

Paper 19: The Interplay Between Machine Learning Techniques and Supply Chain Performance: A Structured Content Analysis

Abstract: Over recent years, disruptive technologies have shown considerable potential to improve supply chain efficiency. In this regard, numerous papers have explored the link between machine learning techniques and supply chain performance. However, this body of research still needs systematization. To fill this gap, this paper aims to systematize published papers highlighting the impact of advanced technologies, such as machine learning, on supply chain performance. A structured content analysis was conducted on 91 selected journal articles from the Scopus and Web of Science databases. Bibliometric analysis identified nine distinct groupings of research papers that explore the relationship between machine learning and supply chain performance. These clusters cover topics such as big data and supply chain management, knowledge management, decision-making processes, business process management, and the applications of big data analytics within this domain. Each cluster's content was clarified through a rigorous systematic literature review. The proposed study can be seen as a comprehensive initiative to systematically map and consolidate this rapidly evolving body of literature. By identifying the key research themes and their interrelationships, this analysis seeks to elucidate the current state of the art and to highlight potential directions for future research in this critical field.

Author 1: Asmaa Es-satty
Author 2: Mohamed Naimi
Author 3: Radouane Lemghari
Author 4: Chafik Okar

Keywords: Bibliometric analysis; machine learning; ProKnow-C methodology; supply chain performance

PDF

Paper 20: A Kepler Optimization Algorithm-Based Convolutional Neural Network Model for Risk Management of Internet Enterprises

Abstract: Internet enterprises, as representative technology-based enterprises, contribute more and more to the growth of the world economy. To ensure the sustainable development of enterprises, it is necessary to predict the risks in the operation of Internet enterprises. An accurate risk prediction model not only safeguards the interests of enterprises but also provides a reference for investors. Therefore, this study designed a Convolutional Neural Network (CNN) model based on the Kepler optimization algorithm (KOA) for risk prediction of Internet enterprises, aiming to maximize the accuracy of the prediction model and to help Internet enterprises carry out risk management. First, we select indicators related to the financial risk of Internet enterprises and predict risk using a traditional statistical Logistic regression model. On this basis, KOA was improved using evolutionary strategies and fish foraging strategies, and the improved algorithm was applied to optimize the CNN. Based on the improved KOA and CNN algorithms, an IKOA-CNN risk prediction model is proposed. Finally, comparisons with traditional statistical models and other learning-based models show that the proposed IKOA-CNN algorithm achieves the highest prediction accuracy.

Author 1: Bin Liu
Author 2: Fengjiao Zhou
Author 3: Haitong Jiang
Author 4: Rui Ma

Keywords: Risk management; Kepler optimization algorithm; Convolutional Neural Network; Internet enterprises

PDF

Paper 21: Advancing Urban Infrastructure Safety: Modern Research in Deep Learning for Manhole Situation Supervision Through Drone Imaging and Geographic Information System Integration

Abstract: This paper introduces a cutting-edge approach to enhancing urban infrastructure safety through the integration of modern technologies. Leveraging state-of-the-art deep learning techniques, specifically recent object detection models with a focus on YOLOv8, we propose a system for supervising and detecting manhole situations using drone imagery and GPS location data. Our experiments with object detection models demonstrate exceptional results, showcasing high accuracy and efficiency in detecting manhole covers and potential hazards in real-time drone imagery. The best trained model is YOLOv8, which achieves a mAP@50 rate of 89% and a precision rate of 95%, surpassing existing methods. By combining this visual information with precise GPS location data, our system offers a comprehensive solution for monitoring urban landscapes. The integration of YOLOv8 not only improves the efficiency of manhole detection but also contributes to proactive maintenance and risk mitigation in urban environments. This research also represents a significant step forward in leveraging modern research methodologies, and the strong results of our trained models underscore the effectiveness of object detection models in addressing critical infrastructure challenges.

Author 1: Ayoub Oulahyane
Author 2: Mohcine Kodad

Keywords: Urban infrastructure safety; object detection; Deep Learning (DL); UAV (Drones); Computer Vision (CV)

PDF

Paper 22: Differential Privacy Federated Learning: A Comprehensive Review

Abstract: Federated Learning (FL) has received a lot of attention lately when it comes to protecting data privacy, especially in industries with sensitive data like healthcare, banking, and the Internet of Things (IoT). However, although FL protects privacy by not sharing raw data, the information transfer during its model update process can still potentially leak user privacy. Differential Privacy (DP), as an advanced privacy protection technology, introduces random noise during data queries or model updates, further enhancing the privacy protection capability of Federated Learning. This paper delves into the theory, technology, development, and future research recommendations of Differential Privacy Federated Learning (DP-FL). Firstly, the article introduces the basic concepts of Federated Learning, including synchronous and asynchronous optimization algorithms, and explains the fundamentals of Differential Privacy, including centralized and local DP mechanisms. Then, the paper discusses in detail the application of DP in Federated Learning under different gradient clipping strategies, including fixed clipping and adaptive clipping methods, and explores the application of user-level and sample-level DP in Federated Learning. Finally, the paper discusses future research directions for DP-FL, emphasizing advancements in asynchronous DP-FL and personalized DP-FL.
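A minimal numpy sketch of the fixed-clipping DP mechanism discussed above: each per-sample gradient is clipped to an L2 norm bound C, then Gaussian noise scaled by C and a noise multiplier is added before averaging. This illustrates the mechanism only; calibrating the noise to a formal privacy budget is omitted.

```python
import numpy as np

def dp_average_gradients(per_sample_grads, clip_norm=1.0, noise_mult=1.1,
                         seed=0):
    rng = np.random.default_rng(seed)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # L2 clip
    total = np.sum(clipped, axis=0)
    # Gaussian noise proportional to the clipping bound (the sensitivity).
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)

grads = [np.random.randn(4) for _ in range(32)]   # one client's batch
print(dp_average_gradients(grads))
```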

Author 1: Fangfang Shan
Author 2: Shiqi Mao
Author 3: Yanlong Lu
Author 4: Shuaifeng Li

Keywords: Federated learning; differential privacy; privacy protection; gradient clipping

PDF

Paper 23: Predictive Modeling of Student Performance Using RFECV-RF for Feature Selection and Machine Learning Techniques

Abstract: Predicting student performance has become a strategic challenge for universities, essential for increasing student success rates, retention, and tackling dropout rates. However, the large volume of educational data complicates this task. Therefore, many research projects have focused on using Machine Learning techniques to predict student success. This study aims to propose a performance prediction model for students at IBN ZOHR University in Morocco. We employ a combination of Random Forest and Recursive Feature Elimination with Cross-Validation (RFECV-RF) for optimal feature selection. Using these features, we build classification models with several Machine Learning algorithms, including AdaBoost, Logistic Regression (LR), k-Nearest Neighbors (k-NN), Naive Bayes (NB), Support Vector Machines (SVM), and Decision Trees (DT). Our results show that the SVM model, using the 8 features selected by RFECV-RF, outperforms the other classifiers with an accuracy of 87%. This demonstrates the effectiveness and efficiency of our feature selection method and the superiority of the SVM model in predicting student performance.
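A direct sketch of the RFECV-RF selection step with scikit-learn, followed by an SVM on the retained features; the data here is synthetic, not the university's records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=800, n_features=20, n_informative=8,
                           random_state=6)

# Recursive feature elimination, cross-validated, using RF importances.
selector = RFECV(RandomForestClassifier(n_estimators=100, random_state=6),
                 step=1, cv=5)
X_sel = selector.fit_transform(X, y)
print("features kept:", selector.n_features_)

svm = SVC(kernel="rbf")
print("SVM accuracy:", cross_val_score(svm, X_sel, y, cv=5).mean())
```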

Author 1: Abdellatif HARIF
Author 2: Moulay Abdellah KASSIMI

Keywords: Student performance prediction; Recursive Feature Elimination (RFE); cross-validation; Random Forest (RF); feature selection; IBN ZOHR University

PDF

Paper 24: A Novel and Refined Contactless User Feedback System for Immediate On-Site Response Collection

Abstract: This paper introduces a Contactless User Feedback System (CUFS) that provides an innovative solution for capturing user feedback through hand gestures. It comprises a User Feedback Device (UFD), a mobile application, and a cloud database. The CUFS operates through a structured sequence, guiding users through a series of questions displayed on an LCD. Using the Pi Camera V2 for contactless hand shape capture, users can express feedback through recognized hand signs. A live video feed enhances user accuracy, while secure data transmission to a database ensures comprehensive feedback collection, including timestamp, date, location, and a unique identifier. A mobile application offers real-time oversight for administrators, presenting facility status insights, data validation outcomes, and customization options for predefined feedback categories. This study also identifies and strategically addresses challenges in image quality, responsiveness, and data validation to enhance the CUFS's overall performance. Innovations include optimized lighting for superior image quality, a parallel multi-threading approach for improved responsiveness, and a data validation mechanism on the server side. The refined CUFS demonstrates recognition accuracies consistently surpassing 93%, validating the effectiveness of these improvements. This paper presents a novel and refined CUFS that combines hardware and software components, contributing significantly to the advancement of contactless human-computer interaction and Internet of Things-based systems.

Author 1: Harold Harrison
Author 2: Mazlina Mamat
Author 3: Farrah Wong
Author 4: Hoe Tung Yew

Keywords: Contactless; human-computer interaction; Internet of Things; machine learning

PDF

Paper 25: A Facial Expression Recognition Method Based on Improved VGG19 Model

Abstract: With the increasing demand for human-computer interaction and the development of affective computing technology, facial expression recognition has become a major focus of research. In this paper, an improved VGG19 network model is proposed by incorporating enhancement strategies, and the facial expression recognition process using the improved VGG19 model is described. We validated the model on the FER2013 and CK+ datasets and conducted comparative experiments on facial expression recognition accuracy between the improved VGG19 and other classic models, including the original VGG19. Instance tests were also performed, using probability histograms to reflect the effectiveness of expression recognition. These experiments and tests demonstrate the superiority, applicability, and stability of the improved VGG19 model for facial expression recognition.
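One common way to "improve" a VGG19 backbone, sketched here in Keras as an assumption rather than the paper's specific enhancement strategies: replace the top with a new classification head and fine-tune on expression data.

```python
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import VGG19

# FER2013 images are 48x48; grayscale frames would be stacked to 3 channels.
base = VGG19(weights="imagenet", include_top=False, input_shape=(48, 48, 3))
base.trainable = False                          # freeze convolutional blocks

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
out = layers.Dense(7, activation="softmax")(x)  # seven expression classes

model = Model(base.input, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```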

Author 1: Lihua Bi
Author 2: Shenbo Tang
Author 3: Canlin Li

Keywords: Facial expression recognition; deep learning; VGG19 model

PDF

Paper 26: Application of Optimizing Multifactor Correction in Fatigue Life Prediction and Reliability Evaluation of Structural Components

Abstract: Multi-factor correction is optimized for fatigue life prediction and reliability evaluation of structural components. Reliability evaluation based on optimized Bayesian theory improves the efficiency of fatigue life prediction and reliability evaluation. The research results indicate that crack propagation length increases with loading time. The average probability density of the corrected method is 3.628, versus 1.242 for the traditional fracture mechanics model. The prediction accuracy of the multi-factor-corrected crack propagation model exceeds that of the traditional fracture mechanics model and is consistent with the experimental results, so the corrected model ensures prediction accuracy. Evaluating the model's reliability, the average prediction accuracy across multiple data sets is over 90%. This method helps predict the fatigue life of structural components and evaluate their reliability, supporting the safe operation of construction machinery.

Author 1: Yi Zhang

Keywords: Multi-factor Bayesian theory correction; structural components; fatigue life; reliability; Bayesian theory

PDF

Paper 27: Recent Advances in Medical Image Classification

Abstract: Medical image classification is crucial for diagnosis and treatment, benefiting significantly from advancements in artificial intelligence. The paper reviews recent progress in the field, focusing on three levels of solutions: basic, specific, and applied. It highlights advances in traditional methods using deep learning models like Convolutional Neural Networks and Vision Transformers, as well as state-of-the-art approaches with Vision-Language Models. These models tackle the issue of limited labeled data, and enhance and explain predictive results through Explainable Artificial Intelligence.

Author 1: Loan Dao
Author 2: Ngoc Quoc Ly

Keywords: Medical Image Classification (MIC); Artificial Intelligence (AI); Vision Transformer (ViT); Vision-Language Model (VLM); eXplainable AI (XAI)

PDF

Paper 28: Revolutionizing Esophageal Cancer Diagnosis: A Deep Learning-Based Method in Endoscopic Images

Abstract: Esophageal cancer (EC) is a severe and increasingly common disease caused by uncontrolled cell growth in the esophagus. It is the sixth leading cause of cancer-related deaths worldwide. Traditional methods for diagnosing EC are not only time-consuming but also suffer from inconsistencies due to human factors such as experience and fatigue. This paper proposes a deep learning (DL) approach for detecting EC from endoscopic images to improve efficiency and accuracy. The study utilizes an endoscopic image dataset of 2000 images evenly split between cancerous and non-cancerous cases. After image preprocessing and augmentation, these images are fed into the proposed Inception ResNet V2 model, whose extracted features are processed by the final classification layers to produce class probabilities. The simulation results revealed that, after fine-tuning, the suggested model attained an accuracy of 98.50%, a precision of 97.50%, a recall of 98.75%, and an F1 score of 98.00%. These results underscore the model's capability to accurately identify EC, minimizing false positives and enhancing diagnostic reliability. The proposed DL framework enables automated EC detection, promising advancements in clinical workflows and patient care.

Author 1: Shincy P Kunjumon
Author 2: S Felix Stephen

Keywords: Deep learning; esophagus cancer; transfer learning; endoscopic images; inception ResNet V2; fine tuning

PDF

Paper 29: A Blockchain Framework for Academic Certificates Authentication

Abstract: This paper proposes a framework to solve academic certificate fraud by implementing a blockchain network. A permissioned Hyperledger Fabric network is deployed to store students' information and grant appropriate access to guarantee the system's security. The paper discusses several studies that introduce variant solutions to the academic certificate tampering problem using blockchain technology. The evaluation finds Hyperledger Fabric secure and performant, with higher TPS than Bitcoin and Ethereum, although latency increases with the number of participants.

Author 1: Ruqaya Abdelmagid
Author 2: Mohamed Abdelsalam
Author 3: Fahad Kamal Alsheref

Keywords: Academic certificates; tampering; security; blockchain; Hyperledger Fabric; Ethereum; channels; nodes; peers; Chaincode

PDF

Paper 30: DGA Domain Name Detection and Classification Using Deep Learning Models

Abstract: In today's cyber environment, modern botnets and malware increasingly employ domain generation mechanisms to circumvent conventional detection solutions that rely on blacklisting or statistical methods for identifying malicious domains. These outdated methods prove inadequate against algorithmically generated domain names, presenting significant challenges for cyber security. Domain Generation Algorithms (DGAs) have become essential tools for many malware families, allowing them to create numerous DGA domain names to establish communication with C&C servers. Consequently, detecting such malware has become a formidable task in cyber security. Traditional approaches to domain name detection rely heavily on manual feature engineering and statistical analysis, with classifiers designed to differentiate between legitimate and DGA domain names. In this study, we propose a novel approach to classify and detect algorithmically generated domain names. Deep learning architectures, including LSTM, RNN, and GRU, are trained and evaluated for their effectiveness in distinguishing between legitimate and malicious domain names. The performance of each model is evaluated using standard metrics such as precision, recall, and F1-score. The findings of this research have significant implications for cyber security defense strategies. Our experimental findings illustrate that the proposed model outperforms current state-of-the-art methods in both DGA domain name classification and detection, achieving 99% accuracy for DGA classification; by integrating additional feature extraction and knowledge-based methods, it surpasses existing models. The experimental outcomes suggest that our proposed gated recurrent unit model can achieve 99% accuracy, a 94% recall rate, and a 98% F1-score for the detection and classification of DGA-generated domain names.
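A compact character-level GRU sketch for the legitimate-vs-DGA task; the character encoding, lengths, and two-domain "dataset" are purely illustrative.

```python
import numpy as np
from tensorflow.keras import Sequential, layers

MAXLEN, VOCAB = 40, 40                       # length cap and charset size

def encode(domain):
    """Map characters to small integer ids; 0 is reserved for padding."""
    x = np.zeros(MAXLEN, dtype="int32")
    for i, ch in enumerate(domain[:MAXLEN]):
        x[i] = (ord(ch) % (VOCAB - 1)) + 1   # crude hashing for the sketch
    return x

X = np.stack([encode(d) for d in ["google.com", "xkqjzt1vbn.net"]])
y = np.array([0, 1])                          # 0 = legitimate, 1 = DGA

model = Sequential([
    layers.Embedding(VOCAB, 32),
    layers.GRU(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=1, verbose=0)          # real training needs many domains
```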

Author 1: Ranjana B Nadagoudar
Author 2: M Ramakrishna

Keywords: Botnet; cyber security; Domain Generation Algorithms (DGAs); gated recurrent unit; Domain Name System (DNS)

PDF

Paper 31: Oversampling Social Media-Sourced Image Datasets for Better Deep Learning Classification of Natural Disaster Damage Levels

Abstract: People in areas affected by natural disasters who use social media websites such as Facebook, Twitter (also known as "X"), and Instagram tend to post images of damage to their surroundings. These social media sites have become vital sources of immediate and highly available data for providing situational awareness and organisation for natural disaster response. Previous attempts at classifying the level of natural disaster damage in these images using image processing techniques have noted the difficulty of producing robust classification models, owing to overfitting caused by a lack of observations and by data imbalance in annotated datasets. This article presents an attempt to improve a data-level training strategy for deep learning models such as VGG16, ResNetV2, and EfficientNetV2, used to estimate the level of disaster damage in images, by training them on data generated using image data augmentation with data balancing, oversampling up to eight times, and combining the oversampled image data collections. The F-1 score achieved for classifying damage in earthquake images and in images from the Hurricane Matthew data collection, by training EfficientNetV2 on a dataset generated from a combination of oversampled data, surpassed previous benchmark results. These results show that applying data balancing and oversampling to the dataset before training deep learning models results in increased robustness.
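A minimal sketch of the balancing step: minority-class images are resampled with replacement up to the majority count before augmentation and training; shapes and class counts are invented.

```python
import numpy as np
from sklearn.utils import resample

X = np.random.rand(130, 64, 64, 3)            # stand-in image tensors
y = np.array([0] * 100 + [1] * 30)            # imbalanced damage labels

X_maj, X_min = X[y == 0], X[y == 1]
X_min_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=7)

X_bal = np.concatenate([X_maj, X_min_up])
y_bal = np.array([0] * len(X_maj) + [1] * len(X_min_up))
print(X_bal.shape, np.bincount(y_bal))        # classes now balanced 100/100
```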

Author 1: Nicholas Lau Kheng Seng
Author 2: Goh Wei Wei
Author 3: Tan Ee Xion

Keywords: Deep learning; image processing; oversampling; image data augmentation

PDF

Paper 32: Large-Scale Image Indexing and Retrieval Methods: A PRISMA-Based Review

Abstract: Large-scale image indexing and retrieval are pivotal in artificial intelligence, especially within computer vision, for efficiently organizing and accessing extensive image databases. This systematic literature review employs the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology to thoroughly analyze and synthesise the current research landscape in this domain. Through meticulous research and a stringent selection process, this study uncovers significant trends, pioneering methodologies, and ongoing challenges in large-scale image indexing and retrieval. Key findings reveal a growing adoption of deep learning techniques, the integration of multimodal data to improve retrieval accuracy, and persistent challenges related to scalability and real-time processing. These insights offer a valuable resource for researchers and practitioners striving to enhance the efficiency and effectiveness of image indexing and retrieval systems.

Author 1: Abdelkrim Saouabe
Author 2: Said Tkatek
Author 3: Hicham Oualla
Author 4: Carlos SOSA Henriquez

Keywords: Image indexing; image retrieval; similarity; PRISMA; computer vision

PDF

Paper 33: Semi-Supervised Clustering Algorithms Through Active Constraints

Abstract: Pairwise constraints improve clustering performance in constraint-based clustering problems, particularly because they are easy to apply. However, choosing these constraints at random may be counterproductive and reduce accuracy. To address the problem of randomly chosen pairwise constraints, an active learning method is used to identify and select the most informative constraints. In this research, we replaced random selection with an active learning strategy. We provide a semi-supervised selective affinity propagation clustering approach with active constraints, which combines the affinity propagation (AP) clustering algorithm with prior information to improve semi-supervised clustering performance. Based on the neighborhood concept, we select the most informative constraints, where neighborhoods include labelled examples of various clusters. Experimental results on eight real datasets demonstrate that the proposed method outperforms other baseline methods and can improve clustering performance significantly.
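A hedged sketch of the AP stage with a toy constraint check; the paper's neighborhood-based active selection is simplified here to verifying must-link pairs after clustering.

```python
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

X, y_true = make_blobs(n_samples=150, centers=3, random_state=8)
labels = AffinityPropagation(random_state=8).fit_predict(X)

# Illustrative must-link constraints: each pair should share a cluster.
must_link = [(0, 1), (10, 11)]
for i, j in must_link:
    print(i, j, "satisfied:", labels[i] == labels[j])
```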

Author 1: Abdulwahab Ali Almazroi
Author 2: Walid Atwa

Keywords: Semi-supervised; pairwise constraints; affinity propagation; active learning

PDF
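
A rough scikit-learn sketch of the core idea above: biasing affinity propagation with must-link and cannot-link pairs through a precomputed similarity matrix. The paper's neighborhood-based active selection of constraints is not reproduced, and the constraint weighting here is a deliberate simplification.

import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import pairwise_distances

def constrained_affinity_propagation(X, must_link, cannot_link):
    """Run AP on a similarity matrix adjusted by pairwise constraints."""
    S = -pairwise_distances(X, metric="sqeuclidean")  # AP's usual similarity
    hi, lo = S.max(), S.min()
    for i, j in must_link:        # pull constrained pairs toward one exemplar
        S[i, j] = S[j, i] = hi
    for i, j in cannot_link:      # push constrained pairs apart
        S[i, j] = S[j, i] = lo
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    return ap.fit_predict(S)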

Paper 34: Using Deep Learning on Retinal Images to Classify the Severity of Diabetic Retinopathy

Abstract: Diabetic retinopathy (DR) is a leading cause of blindness worldwide, particularly among working-age individuals. With the increasing prevalence of diabetes, there is an urgent need to address the public health burden posed by DR. This research aims to develop a clinical decision support approach that integrates automated DR detection with classification of the severity grade of DR. A three-stage deep learning model for DR detection is proposed. The first stage performs preprocessing, image enhancement, and augmentation of the DR images using color space transformations and a filtering technique: BGR to RGB, RGB to LAB, and a Gaussian blur filter. The second stage performs feature extraction and representation learning with a multi-layer CNN. The third stage performs classification with an SVM. Implementing and evaluating the proposed model on a dataset containing five stages of DR are essential steps toward validating its performance and assessing its potential for clinical application. Through thorough dataset preprocessing, model training, performance analysis, comparison with baseline methods, and generalization tests, we gain insight into the model's classification and staging capabilities. This research makes a significant contribution to DR severity detection, ultimately leading to enhanced diagnostic capabilities. The developed models demonstrated an accuracy of 94.72%, indicating their efficacy in assessing the severity of the condition.

Author 1: Shereen A. El-aal
Author 2: Rania Salah El-Sayed
Author 3: Abdulellah Abdullah Alsulaiman
Author 4: Mohammed Abdel Razek

Keywords: Deep learning; diabetic retinopathy (DR); Gaussian Blur Filter; support vector machine (SVM); color space; performance evaluations

PDF
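
A condensed sketch of the three stages named in the abstract above, assuming OpenCV, Keras and scikit-learn; the blur kernel, layer sizes, and feature dimension are placeholders rather than the paper's architecture.

import cv2
import tensorflow as tf
from sklearn.svm import SVC

def preprocess(bgr):
    """Stage one: color conversions and Gaussian blur, as named in the abstract."""
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    lab = cv2.cvtColor(rgb, cv2.COLOR_RGB2LAB)
    blurred = cv2.GaussianBlur(rgb, (5, 5), 0)   # kernel size is an assumption
    return rgb, lab, blurred

# Stage two: a small CNN used as a feature extractor (untrained here; in practice
# it would be trained on the DR images or initialized from pretrained weights).
inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
extractor = tf.keras.Model(inputs, tf.keras.layers.Dense(128, activation="relu")(x))

def fit_severity_svm(images, labels):
    """Stage three: an SVM over the CNN features for the five DR grades."""
    feats = extractor.predict(images, verbose=0)
    return SVC(kernel="rbf").fit(feats, labels)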

Paper 35: An Efficient and Secure Access Authorization Policy for Cloud Storage Resources Based on Fuzzy Searchable Encryption

Abstract: In fuzzy searchable encryption of cloud storage resources, keywords are allowed to vary within a certain range: even with slight differences in spelling, word order, or spacing between words, the correct data can still be matched. However, fuzzy search alone does not provide fine-grained access control (FGAC). Consequently, to satisfy both the security demands of cloud storage assets and the convenience of resource retrieval through fuzzy searchable encryption, this work uses the attribute and policy definitions of CP-ABE to introduce a novel, effective security access authorization approach for cloud storage assets based on fuzzy searchable encryption technology. Cloud storage resources are encrypted after keyword preprocessing, through initialization, file encryption and decryption, index generation and encryption, search, and other steps; a wildcard-based method generates the indexes, and a Bloom filter generates security trapdoors, achieving Paillier-based asymmetric fuzzy searchable encryption of resources. In combination with the CP-ABE-based access control method, authorized users are assigned private keys by the authorization center, ensuring that unauthorized users cannot obtain cloud storage resources and completing the fuzzy searchable encryption access authorization of cloud storage resources. Experiments show that this strategy's search index generation greatly reduces resource utilization and effectively improves fuzzy search speed. Moreover, the combination of fuzzy searchable encryption and CP-ABE better ensures the security of cloud storage resources.

Author 1: Jun Fu

Keywords: Fuzzy search encryption; cloud storage; security access; CP-ABE (Ciphertext-Policy Attribute-Based Encryption); access control; authorization policy

PDF

Paper 36: Comparison of Different Models for Traffic Signs Under Weather Conditions Using Image Detection and Classification

Abstract: This study focuses on enhancing the accuracy of traffic sign detection systems for self-driving vehicles. With the increasing proliferation of autonomous vehicles, reliable detection and interpretation of traffic signs is crucial for road safety and efficiency. The primary goal of this research was to improve the performance of traffic sign detection, particularly in identifying unfamiliar signs and dealing with adverse weather conditions. We obtained a dataset of 3,480 images from Roboflow and utilized deep learning techniques, including Convolutional Neural Networks (CNNs) and architectures such as YOLO and VGG (Visual Geometry Group). Unlike previous studies that focused on a single version of YOLO, this study conducted a comparative analysis of different deep learning models: YOLOv5, YOLOv8, and VGG-16. The results show promising outcomes, with YOLOv5 achieving an accuracy of up to 94.2%, YOLOv8 reaching 95.3% accuracy, and VGG-16 outperforming the other techniques with an impressive 98.68% accuracy. These findings highlight the significant potential for future advancements in traffic sign detection systems, contributing to the ongoing efforts to enhance the safety and efficiency of autonomous driving technologies.

Author 1: Amal Alshahrani
Author 2: Leen Alshrif
Author 3: Fatima Bajawi
Author 4: Razan Alqarni
Author 5: Reem Alharthi
Author 6: Haneen Alkurbi

Keywords: Traffic signs; detection; classification; YOLO; VGG16

PDF
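
For reference, training and inference with one of the compared models (YOLOv8) reduce to a few lines with the ultralytics package; the weights file, dataset YAML, and test image below are placeholders, not artifacts from the study.

from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8 model on a Roboflow-exported dataset
# (the data.yaml path is hypothetical).
model = YOLO("yolov8n.pt")
model.train(data="traffic_signs/data.yaml", epochs=50, imgsz=640)

# Run inference on a test image and print detected sign classes with confidences.
results = model("test_sign.jpg")
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], float(box.conf))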

Paper 37: Ensemble IDO Method for Outlier Detection and N2O Emission Prediction in Agriculture

Abstract: Nitrous oxide (N2O) emissions from agricultural activities significantly contribute to climate change, necessitating accurate predictive models to inform mitigation strategies. This study proposes an ensemble framework combining Isolation Forest, DBSCAN, and One-Class SVM to enhance outlier detection in N2O emission datasets. The dataset, consisting of 2,246 rows and 21 columns, was preprocessed to address missing values and normalize the data. Outlier detection was performed using each method individually, followed by integration through hard and soft voting techniques. The results revealed that Isolation Forest identified 113 outliers, DBSCAN detected 1,801, and One-Class SVM found 118. Hard voting identified 165 outliers, while soft voting detected 734, ensuring a refined dataset for subsequent modeling. The ensemble approach improved the accuracy of the XGBoost model for N2O emission prediction. The best results were obtained using Random Search Cross-Validation hyperparameter tuning with a test size of 20%, achieving a CV MSE of 0.0215, MSE of 0.0144, RMSE of 0.1200, MAE of 0.0723, and an R² of 0.6750. This study demonstrates the effectiveness of combining multiple outlier detection methods to enhance data quality and model performance, supporting more reliable predictions of N2O emissions.

Author 1: Ahmad Rofiqul Muslikh
Author 2: Pulung Nurtantio Andono
Author 3: Aris Marjuni
Author 4: Heru Agus Santoso

Keywords: Ensemble framework; outlier; detection; N2O emission; isolation forest; DBSCAN; one-class SVM

PDF
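
A compact scikit-learn sketch of the hard-voting ensemble step described above; the hyperparameters and the two-vote threshold are illustrative, not the paper's tuned values.

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import DBSCAN
from sklearn.svm import OneClassSVM

def ensemble_outliers(X, min_votes=2):
    """Flag a row as an outlier when at least `min_votes` detectors agree."""
    iso = IsolationForest(random_state=0).fit_predict(X) == -1
    db = DBSCAN(eps=0.5, min_samples=5).fit_predict(X) == -1   # noise points
    oc = OneClassSVM(nu=0.05).fit_predict(X) == -1
    votes = iso.astype(int) + db.astype(int) + oc.astype(int)
    return votes >= min_votes

# X_clean = X[~ensemble_outliers(X)]  # refined dataset passed on to XGBoost

A soft-voting variant would average normalized anomaly scores from the three detectors instead of counting binary flags.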

Paper 38: Fire Evacuation Path Planning Based on Improved MADDPG (Multi-Agent Deep Deterministic Policy Gradient) Algorithm

Abstract: The lack of a scientific and reasonable optimal evacuation path planning scheme is one of the main causes of casualties in fire accidents. In addition to the high temperature and harmful smoke of a fire environment, the crowding caused by the changing positions of people during evacuation also affects the evacuation outcome. Therefore, by improving the multi-agent deep deterministic policy gradient algorithm, an AMADDPG (Adjacency Multi-Agent Deep Deterministic Policy Gradient) model suited to fire evacuation is proposed. First, dangerous grid areas are defined, and the degree of congestion and the nearest exit are considered simultaneously. A learning framework of "distributed execution and centralized local learning" is adopted to realize experience sharing among neighboring agents, improving the model's learning efficiency and evacuation effectiveness. The experimental results show that the model adapts well to complex, dynamic fire environments, achieves optimal path planning within 30, and keeps the degree of congestion on the evacuation path within 0.5, meeting the safe evacuation goal. Meanwhile, compared with the MADDPG algorithm, the model has clear advantages in training efficiency and stability, giving it good application value.

Author 1: Qiong Huang
Author 2: Ying Si
Author 3: Haoyu Wang

Keywords: Fire evacuation path; congestion degree; dangerous grid; multi-agent; Multi-Agent Deep Deterministic Policy Gradient

PDF

Paper 39: Students’ Perceptions of Its Usefulness and Ease of Use on Learning Management System

Abstract: The importance of the Learning Management System (LMS) has been discussed over recent years, as it is crucial for students to manage this tool for their learning. The study's objective was to ascertain whether learners believe the LMS satisfies their learning goals and to bridge the gap between the growing body of research on learner-centered instructional design and LMS design. A survey of 528 students was carried out to collect the data. The results revealed that most learners agreed that the LMS is a useful tool that enhances their learning, showing that the LMS can make their learning better and more effective. The study's conclusions could guide the university's administration in adopting pertinent digital technologies, with the goal of creating an efficient implementation strategy that enhances service delivery. Universities and colleges would benefit from this established approach when selecting the best learning management system (LMS) to meet their diverse needs. It can also act as a guide for developers who want to create an assessment system.

Author 1: Linda Khoo Mei Sui
Author 2: Nurlisa Loke Abdullah
Author 3: Subatira Balakrishnan
Author 4: Wan Sofiah Meor Osman

Keywords: Learning management system; perceptions; usefulness; ease of use

PDF

Paper 40: A Multi-Reading Habits Fusion Adversarial Network for Multi-Modal Fake News Detection

Abstract: Existing multimodal fake news detection methods face three challenges: the lack of extraction for implicit shared features, shallow integration of multimodal features, and insufficient attention to the inconsistency of features across different modalities. To address these challenges, a multi-reading habits fusion adversarial network for multimodal fake news detection is proposed. In this model, to mitigate the influence of feature changes due to events and emotions, a dual discriminator based on domain adversarial training is built to extract invariant common features. Inspired by the diverse reading habits of individuals, three fundamental reading habits are identified, and a multi-reading habits fusion layer is introduced to learn the interdependencies among the multimodal feature representations of the news. To investigate the semantic inconsistencies of different modalities in news, a similarity constraint reasoning layer is proposed, which first explores the semantic consistency between image descriptions and unimodal features, and then delves into the semantic discrepancies between unimodal and multimodal features. Extensive experimentation has been carried out on the multimodal datasets of Weibo and Twitter. The outcomes indicate that the proposed model surpasses the performance of mainstream advanced benchmarks on both platforms.

Author 1: Bofan Wang
Author 2: Shenwu Zhang

Keywords: Multimodal fake news detection; feature extraction; feature fusion; consistency alignment

PDF

Paper 41: Comparison of Resnet Models in UNet Classifier for Mapping Oil Palm Plantation Area with Semantic Segmentation Approach

Abstract: Industrial oil palm plantations in Indonesia grew by 116,000 hectares in 2023, an increase of 54% from the previous year. Oil palm is one of Indonesia's main agricultural commodities, with a significant contribution to the national economy. However, manually mapping and monitoring oil palm land is still a big challenge: the manual process is labor-intensive, time-consuming and costly, and the accuracy of the resulting data is often inadequate, especially in identifying the actual crop condition and land area. Remote sensing (RS) provides extensive and comprehensive data on oil palm land and crop conditions through satellite and drone imagery. This research proposes a method of mapping oil palm plantations using medium-resolution Sentinel satellite imagery, which is widely available and has adequate spatial resolution, and implements an artificial intelligence (AI) method with deep learning (DL) using the UNet classifier, which previous studies have shown to provide sufficient accuracy. The research develops DL architectures with ResNet-34 and ResNet-50 backbones that are expected to further improve the accuracy of segmentation results for oil palm land mapping. The research concluded that semantic segmentation using the UNet classifier with ResNet-34 and ResNet-50 backbones produced F1 scores of 0.89 and 0.922, respectively. At the inference/deployment stage, the ResNet-34 backbone achieved 88.8% accuracy with an inference duration of 10 minutes, and the ResNet-50 backbone achieved 91.8% accuracy with an inference duration of 20 minutes.

Author 1: Fepri Putra Panghurian
Author 2: Hady Pranoto
Author 3: Edy Irwansyah
Author 4: Fabian Surya Pramudya

Keywords: Deep learning; UNet; ResNet; oil palm; semantic segmentation

PDF
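
Assuming the segmentation_models_pytorch library, the UNet-with-ResNet-backbone configuration described above can be sketched as follows; the input bands, tile size, and loss choice are assumptions, not the paper's settings.

import torch
import segmentation_models_pytorch as smp

# UNet with a ResNet-34 encoder for binary oil-palm segmentation;
# swap encoder_name to "resnet50" for the second configuration.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3,          # e.g. a Sentinel RGB composite; band choice is assumed
    classes=1,
)
loss = smp.losses.DiceLoss(mode="binary")

x = torch.randn(4, 3, 256, 256)   # dummy batch of image tiles
mask_logits = model(x)            # shape (4, 1, 256, 256)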

Paper 42: Enhancing Customer Experience Through Arabic Aspect-Based Sentiment Analysis of Saudi Reviews

Abstract: Big brands thrive in today's competitive marketplace by focusing on customer experience through product reviews. Manual analysis of these reviews is labor-intensive, necessitating automated solutions. This paper conducts aspect-based sentiment analysis on Saudi dialect product reviews using machine learning and NLP techniques. Addressing the lack of datasets, we create a unique dataset for Aspect-Based Sentiment Analysis (ABSA) in Arabic, focusing on the Saudi dialect, comprising two manually annotated datasets of 2000 reviews each. We experiment with feature extraction techniques such as Part-of-Speech tagging (POS), Term Frequency-Inverse Document Frequency (TF-IDF), and n-grams, applying them to machine learning algorithms including Support Vector Machine (SVM), Random Forest (RF), Naive Bayes (NB), and K-Nearest Neighbors (KNN). Our results show that for electronics reviews, RF with TF-IDF, POS tagging, and tri-grams achieves 86.26% accuracy, while for clothes reviews, SVM with TF-IDF, POS tagging, and bi-grams achieves 86.51% accuracy.

Author 1: Razan Alrefae
Author 2: Revan Alqahmi
Author 3: Munirah Alduraibi
Author 4: Shatha Almatrafi
Author 5: Asmaa Alayed

Keywords: Customer experience; Arabic natural language processing; sentiment analysis; Arabic Aspect-Based Sentiment Analysis; online reviews; review analytics; e-commerce; business owners

PDF
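
A minimal scikit-learn sketch of one feature/classifier combination from the experiments above (TF-IDF n-grams feeding an SVM); the POS-tag features and Arabic-specific preprocessing are omitted, and the variable names are placeholders.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# TF-IDF over word uni/bi-grams with a linear SVM (the best clothes-review setup
# in the abstract also adds POS tagging, not shown here).
pipe = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("svm", SVC(kernel="linear")),
])

# reviews: list of Saudi-dialect review strings; labels: aspect-sentiment classes.
# print(cross_val_score(pipe, reviews, labels, cv=5).mean())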

Paper 43: A Comprehensive Study on Crude Oil Price Forecasting in Morocco Using Advanced Machine Learning and Ensemble Methods

Abstract: This study employs a range of machine learning models to forecast crude oil prices in Morocco, including Linear Regression, Random Forest, Support Vector Regression (SVR), XGBoost, ARIMA, Prophet and Gradient Boosting. Among these, SVR demonstrated the highest accuracy with an RMSE of 1.414. Additionally, the ARIMA and Prophet models were evaluated, yielding RMSEs of 2.46 and 1.41, respectively. An ensemble model, which combines predictions from all the individual models, achieved an RMSE of 2.144, indicating robust performance. Projections for 2024-2027 show a rising trend in crude oil prices, with the SVR model forecasting 21.91 MAD in 2027, and the ensemble model predicting 14.47 MAD. These findings underscore the effectiveness of ensemble learning and advanced machine learning techniques in producing reliable economic forecasts, offering valuable insights for stakeholders in the energy sector.

Author 1: Hicham BOUSSATTA
Author 2: Marouane CHIHAB
Author 3: Younes CHIHAB

Keywords: Crude oil prices; machine learning; ensemble model; economic forecasts; energy sector

PDF

Paper 44: Children's Expression Recognition Based on Multi-Scale Asymmetric Convolutional Neural Network

Abstract: This paper proposes a multi-scale asymmetric convolutional neural network (MACNN), specifically designed to tackle the challenges encountered by traditional convolutional neural networks in the realm of children's facial expression recognition. MACNN addresses problems like low accuracy from facial expression changes, poor generalization across datasets, and inefficiency in traditional convolution operations. The model introduces a multi-scale convolution layer for capturing diverse features, enhancing feature extraction and recognition accuracy. Additionally, an asymmetric convolutional layer is integrated to learn directional features, improving robustness and generalization in facial expression analysis. Post-training, this layer can revert to a standard square convolutional layer, optimizing efficiency for child expression recognition. Experimental results indicate that the proposed algorithm achieves a recognition accuracy of 63.35% on a self-constructed children's expression dataset, under the configuration of a GPU Tesla P100 with 16GB video memory. This performance exceeds all comparative algorithms and maintains efficient recognition. Furthermore, the algorithm attains a recognition accuracy of 78.26% on the extensive natural environment expression dataset RAF-DB, highlighting its robustness, generalization capability, and potential for practical application.

Author 1: Pengfei Wang
Author 2: Xiugang Gong
Author 3: Qun Guo
Author 4: Guangjie Chang
Author 5: Fuxiang Du

Keywords: Children's expression recognition; convolutional neural network; multi-scale asymmetric convolutional neural network; asymmetric convolutional layers

PDF

Paper 45: Reinforcement Learning Driven Self-Adaptation in Hypervisor-Based Cloud Intrusion Detection Systems (RLDAC-IDS)

Abstract: With the rise in cloud adoption, securing dynamic virtual environments remains a significant challenge. While traditional Intrusion Detection Systems (IDS) have attempted to address security concerns in the cloud mostly through static detection rules and without adaptation capabilities to identify new attack vectors, a self-optimizing framework called Reinforcement Learning-Driven Self-Adaptation in Hypervisor-Based Cloud Intrusion Detection Systems (RLDAC-IDS) is suggested to overcome this limitation. RLDAC-IDS leverages the inherent visibility of hypervisors into virtualized resources to gain valuable insights into cloud operations and threats. Its key components include real-time behavioral analysis, anomaly detection, and identification of known threats. The innovation of RLDAC-IDS lies in the incorporation of reinforcement learning to continuously improve the detection rules and responses. RLDAC-IDS exemplifies intelligent intrusion detection through its ability to learn and adapt to new threat patterns autonomously. Through continuous optimization and intelligent intrusion detection techniques, the system evolves to tackle emerging attack vectors while minimizing false alarms. Unlike static solutions, RLDAC-IDS is highly adaptive and easily adjusts to the changing conditions of cloud environments. In summary, RLDAC-IDS represents a major advancement in cloud IDS through its adaptive, self-learning approach, overcoming the limitations of existing solutions to provide robust protection amidst the complexities and dynamics of modern virtualized settings.

Author 1: Alaa A. Qaffas

Keywords: Cloud security; intrusion detection system; adaptive framework; hypervisor-based IDS; self-adaptation; emerging threat detection; reinforcement learning; behavioral analysis; cloud computing; intelligent intrusion detection

PDF

Paper 46: Adaptive Language-Interacted Hyper-Modality Representation for Multimodal Sentiment Analysis

Abstract: To mitigate the problems of neglecting unimodal information and incorporating emotionally unrelated data during the fusion of multimodal representations, this study presents an Adaptive Language-interacted Representation (ALR) model. Initially, a unimodal representation module is utilized to obtain a minimal but adequate representation of the unimodal information. Subsequently, acknowledging that the video and audio modalities may contain sentiment-irrelevant data, a hyper-modality representation is constructed to mute the impact of irrelevant sentimental information; this is achieved through interaction among text, video and audio features. Finally, the hyper-modality representation is integrated through a multimodal fusion module, enabling more efficient multimodal sentiment analysis. On the CMU-MOSEI, MELD and IEMOCAP datasets, the model outperforms the majority of existing sentiment analysis models.

Author 1: Lei Pan
Author 2: WenLong Liu

Keywords: Multimodal; multimodal fusion; sentiment analysis; adaptive language-interacted

PDF

Paper 47: Q-learning Guided Grey Wolf Optimizer for UAV 3D Path Planning

Abstract: Path planning is a critical component of autonomous unmanned aerial vehicle (UAV) navigation systems, yet traditional and sampling-based methods encounter limitations in three-dimensional (3D) path planning. This paper offers a structured review of applicable algorithms in 3D space, introduces the state-of-the-art techniques, and addresses cutting-edge challenges associated with UAV heuristic decomposition methods. Furthermore, we develop a Q-learning guided grey wolf optimizer (QGWO) to tackle the UAV 3D path planning problem in complex scenarios. QGWO incorporates two exploration strategies from the Aquila optimizer into the grey wolf optimizer, enhancing its capacity to escape local optima and utilize the population for broader exploration. Q-learning guides the search process, enabling the algorithm to store iterative information, accelerate convergence, and balance exploration and exploitation. Additionally, Laplace crossover perturbs the positions of the α and β wolves, preventing the algorithm from becoming trapped in local optima. To validate its effectiveness, QGWO and ten advanced heuristic algorithms were tested in 3D path planning simulations across six terrain scenarios of varying complexity. Experimental results demonstrate that QGWO achieves optimal cost metrics, outperforming the original grey wolf optimizer by up to 1.34% and significantly surpassing other algorithms with a 70.92% reduction in standard deviation. This highlights the effectiveness and robustness of QGWO in 3D path planning for UAVs. Moreover, the Wilcoxon rank sum test shows that the null hypothesis is rejected in 98.33% of cases, confirming the statistical superiority of the proposed QGWO.

Author 1: Binbin Tu
Author 2: Fei Wang
Author 3: Xiaowei Han
Author 4: Xibei Fu

Keywords: Q-learning; grey wolf optimizer; laplace crossover; 3D path planning; optimization

PDF
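
As background for the improvements described above, here is a minimal NumPy sketch of the canonical grey wolf position update that QGWO builds on; the Q-learning guidance, Aquila-derived exploration strategies, and Laplace crossover from the paper are not reproduced.

import numpy as np

def gwo_step(wolves, alpha, beta, delta, t, T):
    """One canonical GWO iteration: each wolf moves toward the three leaders."""
    a = 2 - 2 * t / T                      # exploration factor decays linearly
    new = np.empty_like(wolves)
    for i, X in enumerate(wolves):
        candidates = []
        for L in (alpha, beta, delta):     # alpha, beta, delta: best three wolves
            r1, r2 = np.random.rand(*X.shape), np.random.rand(*X.shape)
            A, C = 2 * a * r1 - a, 2 * r2
            candidates.append(L - A * np.abs(C * L - X))
        new[i] = np.mean(candidates, axis=0)  # average of the three moves
    return new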

Paper 48: Security Enhanced Edge Computing Task Scheduling Method Based on Blockchain and Task Cache

Abstract: To address edge computing nodes' limited computing and storage capacity, a two-layer task scheduling model based on blockchain and task caching is proposed. The results of high-similarity tasks are cached in an edge cache pool, combined with a blockchain-assisted task caching model to enhance system security. A genetic evolution algorithm is used to solve for the minimum cost obtainable by the optimal scheduling model, with the genetic algorithm's initialization and mutation operations adjusted to improve the convergence rate. Compared with algorithms without cache pooling and without blockchain, the proposed joint blockchain and task-caching scheduling model reduced cost by 9.4% and 14.3%, respectively. As the capacity of the cache pool increased, the system cost gradually decreased: compared with a cache capacity of 3 GB, a capacity of 10 GB reduced the system cost by 10.6%. The system cost also decreased as the computing power of edge nodes increased: compared with edge nodes at a computing frequency of 8 GHz, the cost at 18 GHz was reduced by 36.4%. The proposed edge computing task scheduling model therefore ensures the security of task scheduling while reducing delay and control costs, providing a foundation for modern industrial task scheduling.

Author 1: Cong Li

Keywords: Blockchain; task cache; edge computing; task scheduling; industrial internet

PDF

Paper 49: Advanced Fusion of 3D U-Net-LSTM Models for Accurate Brain Tumor Segmentation

Abstract: Accurate detection and segmentation of brain tumors are essential in tomography for effective diagnosis and treatment planning. This study presents advancements in 3D segmentation techniques using data from the Kaggle BRATS 2020 dataset. To enhance the reliability of brain tumor diagnosis, it employs Frost filter-based preprocessing, the UNet segmentation architecture, and Long Short-Term Memory (LSTM) segmentation. The methodology starts with data preprocessing using the Frost filter, which effectively reduces noise and enhances image clarity, thereby improving segmentation accuracy. The UNet architecture is then utilized to precisely segment brain tumor regions; its ability to capture contextual information and its efficient use of skip connections contribute to accurately delineating tumor boundaries in three-dimensional space. Additionally, the temporal aspect of brain tumor progression is addressed with an LSTM network, which further increases segmentation accuracy. The LSTM integrates temporal patterns in sequential imaging data, enabling reliable segmentation of tumor presence and characteristics over time; by analyzing the ordered sequence of continuous MRI scans, it achieves more precise and adaptable tumor recognition. Evaluation on the Kaggle BRATS 2020 dataset demonstrates significant improvements in detection and segmentation performance compared to previous methods. The proposed approach enhances the accuracy of tumor boundary delineation as well as the ability to classify tumor types and track temporal changes in tumor growth. The U-Net-LSTM method, implemented in Python, achieves an accuracy of 98.9% in segmentation tasks, showcasing its superior performance compared to other techniques.

Author 1: Ravikumar Sajjanar
Author 2: Umesh D. Dixit

Keywords: Brain tumor segmentation; Frost filter pre-processing; UNet architecture; LSTM; Kaggle BRATS 2020 dataset

PDF

Paper 50: Deployment of Secure Data Parameters Between Stock Inverters and Interfaces Using Command-Contamination-Stealth Management System

Abstract: Security issues have a strong impact on stock data, which stockholders (SHs) and stock-inverters (SIs) rely on to predict and invert assets and stock values; security flaws and threats that let an attacker take over network devices allow a compromised system to be used to attack other systems, enabling the prediction and inversion of false assets and stock values. This study proposes test scenarios that regulate various BOTNETs, layered threshold-influenced data security parameters, and DDoS vulnerabilities for stock data integration and validation. To study the behavioral entry and exit points of SHs and SIs, it integrates a three-tier procedure with threshold-influenced data security criteria and data matrices. The first layer is framed by Role Management (RM), Remote Level of Command Executions (RLCE), LAN-WAN-LAN Transmission (LWL-T), and Detection of Conceal and Prevention (DoCP) environments; RM, RLCE, LWL-T and DoCP are tuned with the threshold-influenced data security parameters that most affect stock values. The second layer is framed by Module Management (MM), a Command Module (ComM), a Contamination Module (ConM), and a Stealth Module (SM). The third layer is framed by expected scenarios and thresholds for the various vulnerabilities and threats that arise from DoS attacks and BOTNETs. All layers are interconnected and integrated with the behavioral factors of SHs and SIs. Vulnerabilities are tuned with SH and SI input data and filtered with SH and SI behavioral matrices, and alerts are generated according to existing entries of the data. The influenced threshold metrics are tuned through ARIMA and LSTM for future analysis of stock values. The authentication mode synchronizes dual- and multi-factor authentication execution, tuned to cross-verify investor credentials.

Author 1: Santosh Kumar Henge
Author 2: Sanjeev Kumar Mandal
Author 3: Ameya Madhukar Rane
Author 4: Megha Sharma
Author 5: Ravleen Singh
Author 6: S Anka Siva Phani Kumar
Author 7: Anusha Marouthu

Keywords: Robot-network (BOTNET); Module Management (MM); Role Management (RM); Detection Conceal and Prevention (DoCP); LAN-WAN-LAN transmission (LWL-T); Remote Level Command Executions (RLCE); Distributed Denial-of-Service (DDoS)

PDF

Paper 51: Compliance Framework for Personal Data Protection Law Standards

Abstract: Personal data protection laws are crucial for protecting individual privacy in a data-driven world. To this end, the Kingdom of Saudi Arabia has published the Personal Data Protection Law (PDPL), which aims to empower individuals to manage and control their personal information more securely and effectively. However, data management ecosystems that process such data face challenges directly applying PDPL due to difficulties translating legal provisions into a technological context. Furthermore, non-compliance with PDPL can result in financial, legal, and reputational risks. To address these challenges, this paper developed an approach for legal compliance with PDPL through a framework that analyses and translates legal terms into measurable data management standards. The framework guides data management ecosystems in implementing and complying with PDPL requirements and covers all integral parts of data management. To demonstrate the practical application of this approach, a case study utilized two advanced deep learning models, MARBERTv2 and AraELECTRA, to enhance privacy policy adherence in Saudi Arabian websites with PDPL requirements. The results are highly promising, with MARBERTv2 achieving a micro-average F1-score of 93.32% and AraELECTRA delivering solid performance at 92.46%. This underscores the effectiveness of deep learning models in facilitating PDPL compliance.

Author 1: Norah Nasser Alkhamsi
Author 2: Sultan Saud Alqahtani

Keywords: Personal data protection law (PDPL); framework; data management; data protection; privacy policy

PDF

Paper 52: Novel Cognitive Assisted Adaptive Frame Selection for Continuous Sign Language Recognition in Videos Using ConvLSTM

Abstract: People with a hearing impairment commonly use sign language for communication; however, they find it challenging to communicate with a hearing person who does not understand sign language, and normally require a human intermediary to act as a translator to convey their thoughts. To address this issue, this work aims to enhance their communication capability by eliminating the need for an intermediary: it develops a sign language converter that uses a vision-based dynamic recognition strategy to convert continuous sign language into multimodal output. The work introduces a deep neural network based on convolutional long short-term memory (ConvLSTM) networks for real-time dynamic recognition of the gestures of hearing-impaired persons captured through cameras. Investigations of continuous sign language recognition (CSLR) were carried out on the Chinese Sign Language dataset, CSL-Daily, Phoenix-2014 and Phoenix-2014T datasets, with performance comparisons against conventional LSTM and Gated Recurrent Unit (GRU) models. Experimental results show that the ConvLSTM network outperforms the other techniques, detecting sign actions with a better accuracy of 90% and a precision rate of 0.93, and that integrating the proposed novel cognitive-assisted adaptive keyframe selection makes the meaning of each sign sequence easy to interpret. The proposed system could readily be implemented in a modern learning management system.

Author 1: Priyanka Ganesan
Author 2: Senthil Kumar Jagatheesaperumal
Author 3: Matheshkumar P
Author 4: Silvia Gaftandzhieva
Author 5: Rositsa Doneva

Keywords: ConvLSTM; GRU; keyframes; LSTM; sequential learning; sign language recognition

PDF
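
A skeletal Keras version of a ConvLSTM classifier over gesture clips, in the spirit of the network described above; the clip shape, layer sizes, and class count are placeholders, and the paper's cognitive-assisted adaptive keyframe selection is not included.

import tensorflow as tf

# Classify short sign clips: input is (frames, height, width, channels).
model = tf.keras.Sequential([
    tf.keras.layers.ConvLSTM2D(32, kernel_size=3, input_shape=(16, 64, 64, 3)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(100, activation="softmax"),  # 100 sign classes (assumed)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])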

Paper 53: Microarray Gene Expression Dataset Feature Selection and Classification with Swarm Optimization to Diagnosis Diseases

Abstract: Bioinformatics produces vast, data-intensive biological information, and the pace of data accumulation buries useful signals in undesired information. Bioinformatics commonly applies statistical methods to gene expression data for cancer diagnosis and prognosis, and microarray data provide rough approximations for gene expression analysis. Microarray datasets contain massive numbers of gene features relative to the sample size, so it is necessary to evaluate the features in a microarray dataset to obtain effective outcomes from patterns of gene expression. This paper presents re-sampling of random probability with Swarm Optimization (RRP_SW). The RRP_SW model uses random re-sampling to estimate features, which are then evaluated through the computation of a multi-objective optimization model. Re-sampling estimates the features in the microarray dataset, and features are sampled through the computation of probability values for classification. With the RRP_SW model, extreme learning is utilized to classify the features of microarray data on benchmark datasets.

Author 1: Peddarapu Rama Krishna
Author 2: Pothuraju Rajarajeswari

Keywords: Feature selection; classification; gene expression data; microarray; RRP_SW; hybrid feature selection

PDF

Paper 54: Hidden Markov Model for Cardholder Purchasing Pattern Prediction

Abstract: This study utilizes the Hidden Markov Model to predict cardholder purchasing patterns by monitoring card transaction trends and profiling cardholders based on dominant transactional motivations across four merchant sectors, i.e., service centers, social joints, restaurants, and health facilities. The research addresses shortfalls in existing studies, which often disregard credit, prepaid, and debit card transactions outside online transaction channels and focus primarily on credit card fraud detection. It also addresses the challenges of existing prediction algorithms such as support vector machine, decision tree, and naïve Bayes classifiers. The research presents a three-phase Hidden Markov Model implementation comprising initialization, decoding, and evaluation, all executed through a Python script and further validated through a 2-fold cross-validation technique. The study uses an experimental design to systematically investigate cardholder transactional patterns, exposing training and validation data to varied initial and transition state probabilities to optimize prediction outcomes. The results are evaluated through three key metrics, i.e., accuracy, precision, and recall, achieving optimal performance of 100% for both accuracy and precision, with a 99% recall rate, thereby outperforming existing predictive algorithms such as support vector machine, decision tree, and naïve Bayes classifiers. This study demonstrates the Hidden Markov Model's effectiveness in dynamically modeling cardholder behaviors within merchant categories, offering a fuller understanding of the real motivations behind card transactions. The implications of this research encompass enhancing merchant growth strategies by empowering card acquirers and issuers with a better approach to optimizing their operations and marketing synergies based on a clear understanding of cardholder transactional patterns. Further, the research contributes significantly to consumer behavior analysis and predictive modeling within the card payments ecosystem.

Author 1: Okoth Jeremiah Otieno
Author 2: Michael Kimwele
Author 3: Kennedy Ogada

Keywords: Hidden Markov Model; cardholder transaction patterns; merchant categories; predictive algorithms

PDF
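
A toy sketch of the three-phase HMM workflow named above (initialization, decoding, evaluation), assuming a recent version of the hmmlearn library with its CategoricalHMM class; the sector encoding and the ten-transaction sequence are invented for illustration.

import numpy as np
from hmmlearn import hmm

# Observations: 0=service center, 1=social joint, 2=restaurant, 3=health facility
# (an illustrative encoding of the four merchant sectors).
transactions = np.array([[0, 2, 2, 1, 3, 2, 0, 2, 1, 2]]).T

model = hmm.CategoricalHMM(n_components=3, n_iter=100, random_state=0)
model.fit(transactions)                          # initialization + training
states = model.predict(transactions)             # decoding (Viterbi path)
print(model.score(transactions))                 # evaluation (log-likelihood)
print(model.emissionprob_[states[-1]])           # crude next-purchase tendency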

Paper 55: Comparative Analysis of Naïve Bayes Classifier, Support Vector Machine and Decision Tree in Rainfall Classification Using Confusion Matrix

Abstract: The climate in Indonesia remains unstable to this day, and this unstable climate change makes rainfall conditions difficult to predict. An algorithm is therefore needed to help the public predict rainfall conditions using rainfall, temperature and humidity parameters. The research uses daily climate data from the Indonesia Climatology Agency spanning 2018-2023. The Naïve Bayes Classifier (NBC) proved less able to capture complex feature interactions, with an accuracy of 97%-98%; the Support Vector Machine (SVM) achieved an accuracy of 92%-94% with fewer prediction errors than NBC; and the Decision Tree reached an accuracy of 99%-100% but experienced overfitting, especially when the testing set contained 50% of the data. Even though the Decision Tree shows the best raw performance, its risk of overfitting makes SVM the more stable choice in this research.

Author 1: Elvira Vidya Berliana
Author 2: Mardhani Riasetiawan

Keywords: Naïve Bayes Classifier (NBC); Support Vector Machine (SVM); decision tree; confusion matrix; classification; rainfall; temperature; humidity

PDF

Paper 56: Calibrating Hand Gesture Recognition for Stroke Rehabilitation Internet-of-Things (RIOT) Using MediaPipe in Smart Healthcare Systems

Abstract: Stroke rehabilitation is fraught with challenges, particularly regarding patient mobility, imprecise assessment scoring during the therapy session, and the security of healthcare data shared online. This work aims to address these issues by calibrating hand gesture recognition systems using the Rehabilitation Internet-of-Things (RIOT) framework and examining the effectiveness of machine learning algorithms in conjunction with the MediaPipe framework for gesture recognition calibration. RIOT represents an IoT system developed for the purpose of facilitating remote rehabilitation, with a particular focus on individuals recovering from strokes and residing in geographically distant regions, in addition to healthcare professionals specialising in physical therapy. The Design of Experiment (DoE) methodology allows physiotherapists and researchers to systematically explore the relationship between RIOT and accurate hand gesture recognition using Python's MediaPipe library, by addressing possible factors that may affect the reliability of patients’ scoring results while emphasising data security consideration. To ensure precise rehabilitation assessments, this initiative seeks to enhance accessible home-based stroke rehabilitation by producing optimal and secure calibrated hand gesture recognition with practical recognition techniques. These solutions will be able to benefit both physiotherapists and patients, especially stroke patients who require themselves to be monitored remotely while prioritising security measures within the smart healthcare context.

Author 1: Ahmad Anwar Zainuddin
Author 2: Nurul Hanis Mohd Dhuzuki
Author 3: Asmarani Ahmad Puzi
Author 4: Mohd Naqiuddin Johar
Author 5: Maslina Yazid

Keywords: Internet-of-Things (IoT); RIOT; stroke rehabilitation; calibration; machine learning; MediaPipe; data security; smart healthcare

PDF
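
A brief sketch of landmark extraction with MediaPipe's hand-tracking solution, of the kind that could feed the calibrated gesture classifier discussed above; the confidence threshold and webcam source are illustrative.

import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=False,
                                 max_num_hands=1,
                                 min_detection_confidence=0.7)

cap = cv2.VideoCapture(0)          # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        # 21 (x, y, z) landmarks per hand: the raw features for a gesture classifier.
        lm = result.multi_hand_landmarks[0].landmark
        print(lm[8].x, lm[8].y)    # index fingertip, normalized coordinates
cap.release()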

Paper 57: Analysis of Learning Algorithms for Predicting Carbon Emissions of Light-Duty Vehicles

Abstract: This research presents a comparative analysis of different learning methods developed for predicting carbon emissions from light-duty vehicles. With growing concern over environmental sustainability, accurate prediction of carbon emissions is vital for developing effective mitigation strategies. The work assesses the performance of various algorithms trained on vehicle-specific data attributes to predict the emission patterns of petrol and diesel light-duty models. Two real-time petrol and diesel datasets collected with the CariQ app and device are used, along with a Canadian government dataset from an online repository, for vehicle emission prediction. The evaluation is based on predictive accuracy. The findings reveal insights into the effectiveness of different learning techniques in accurately estimating vehicle carbon emissions, providing valuable guidance for policymakers and researchers in environmental sustainability and transportation planning.

Author 1: Rashmi B. Kale
Author 2: Nuzhat Faiz Shaikh

Keywords: Carbon emission; machine learning algorithms; CariQ carbon emission dataset; Air Quality Index (AQI)

PDF

Paper 58: Metaheuristic Optimization for Dynamic Task Scheduling in Cloud Computing Environments

Abstract: Cloud computing enables the sharing of resources across the Internet in a highly adaptable and quantifiable way. This technology allows users to access customizable distributed resources and offers various services for resource allocation, scientific operations, and service computing via virtualization. Effectively allocating tasks to available resources is essential to providing reliable consumer performance. Task scheduling in cloud computing models presents substantial challenges as it necessitates an efficient scheduler to map multiple tasks from numerous sources and dynamically distribute resources to users based on their requirements. This study presents a metaheuristic optimization methodology that integrates load balancing by dynamically distributing tasks across available resources based on current load conditions. This ensures an even distribution of workloads, preventing resource bottlenecks and enhancing overall system performance. The suggested method is suitable for both constant and variable activities. Our technique was compared with established metaheuristic methods, including HDD-PLB, HG-GSA, and CAAH. The proposed method demonstrated superior performance due to its adaptive load balancing mechanism and efficient resource utilization, reducing task completion times and improving overall system throughput.

Author 1: Longyang Du
Author 2: Qingxuan Wang

Keywords: Dynamic task scheduling; cloud computing; metaheuristic optimization; load balancing; task allocation; resource utilization

PDF

Paper 59: Edge Computing for Real-Time Decision Making in Autonomous Driving: Review of Challenges, Solutions, and Future Trends

Abstract: In the coming half-century, autonomous vehicles will share the roads alongside manually operated automobiles, leading to ongoing interactions between the two categories of vehicles. The advancement of autonomous driving systems has raised the importance of real-time decision-making abilities. Edge computing plays a crucial role in satisfying this requirement by bringing computation and data processing closer to the source, reducing delay, and enhancing the overall efficiency of autonomous vehicles. This paper explores the core principles of edge computing, emphasizing its capability to handle data close to its origin. The study focuses on the issues of network reliability, safety, scalability, and resource management. It offers insights into strategies and technology that effectively handle these challenges. Case studies demonstrate practical implementations and highlight the real-world benefits of edge computing in enhancing decision-making processes for autonomous vehicles. Furthermore, the study outlines upcoming trends and examines emerging technologies such as artificial intelligence, 5G connectivity, and innovative edge computing architectures.

Author 1: Jihong XIE
Author 2: Xiang ZHOU
Author 3: Lu CHENG

Keywords: Edge computing; autonomous driving; real-time decision-making; reliability; resource management

PDF

Paper 60: Group Non-Critical Behavior Recognition Based on Joint Attention Mechanism of Sensor Data and Semantic Domain

Abstract: As science and technology continue to advance, sensor technology is being used in more and more industries. However, traditional methods ignore the semantic information of individual behavior and the correlation between individuals and groups. Based on this, the study proposes a new method for group behavior recognition: features are extracted from group behavior by collecting sensor data and combining a joint data-domain and semantic-domain attention mechanism, enabling accurate identification of non-critical behaviors in the group. The findings showed that, with constant group membership, the hybrid network based on a convolutional neural network and a bidirectional long short-term memory network improved F1 by 0.2% and accuracy by 0.19%, and a hybrid network combining a graph neural network, a bidirectional long short-term memory network, and a convolutional neural network improved results further. In group behavior recognition, group relationship modeling based on a graph convolutional network improved F1 by 0.17% and accuracy by 0.17% compared to the hybrid network, indicating that group relationship modeling better captures group interaction features and improves recognition. The method is highly effective in group behavior recognition and is expected to provide a new approach for monitoring and managing group behavior in practical scenarios.

Author 1: Chen Li
Author 2: Baoluo Liu

Keywords: Sensor data; attention mechanisms; semantic domains; non-critical; group behavior

PDF

Paper 61: Quantum Cryptology in the Big Data Security Era

Abstract: Quantum cryptography, based on the principles of quantum mechanics, has emerged as a cutting-edge domain for cryptographic applications. A prime example is quantum key distribution, which offers an information-theoretically secure solution to the key exchange challenge. The inherent strength of quantum cryptography lies in its ability to accomplish cryptographic tasks deemed insurmountable through classical communication alone. This paper explores the landscape of quantum computing in the Big Data era, drawing parallels with classical methodologies. It illuminates the constraints of current approaches and suggests avenues for progress. By unravelling the intricacies of quantum cryptography and highlighting its deviations from classical counterparts, this study enriches the ongoing discourse on secure communication protocols. The findings underscore the significance of quantum cryptographic methods, fueling further exploration and development in this dynamic and promising field and contributing to data security.

Author 1: Chaymae Majdoubi
Author 2: Saida El Mendili
Author 3: Youssef Gahi

Keywords: Data security; quantum cryptology; big data; cryptography

PDF

Paper 62: Design and Development of a Unified Query Platform as Middleware for NoSQL Data Stores

Abstract: Advancements in technology such as Web 2.0 and 3.0, mobile devices, and recently IoT devices have given rise to massive amounts of structured, semi-structured and unstructured datasets, i.e. big data. The increasing complexity and diversity of data sources pose significant challenges for stakeholders when extracting meaningful insights. This paper demonstrates how we developed a unified query prototype as middleware, using a polyglot technique, capable of interrogating and manipulating the four categories of NoSQL data models. The study applied established algorithms to different aspects of the prototype to attain its objective. The prototype was subjected to an experiment in which varying query workloads were processed, with performance data comprising application performance index, memory consumption, execution time and error rates. The results demonstrated that the prototype had a low error rate, indicating its robustness and reliability, and showed that it is responsive and able to query the underlying storage systems effectively and efficiently. The prototype provides a standardized set of operations abstracting the complexities of each underlying storage system, reducing the need for multiple data retrieval management systems.

Author 1: Hadwin Valentine
Author 2: Boniface Kabaso

Keywords: Unified query; polyglot; NoSQL; middleware; query processing; big data

PDF
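
The paper's prototype is not public, so the following is a purely illustrative Python sketch of the polyglot-middleware idea: one standardized find() contract with per-store adapters (only a MongoDB adapter is shown, and the adapter and class names are hypothetical).

from typing import Any, Dict, List, Protocol

class StoreAdapter(Protocol):
    """Common contract each NoSQL backend adapter must satisfy."""
    def find(self, collection: str, query: Dict[str, Any]) -> List[Dict[str, Any]]: ...

class MongoAdapter:
    def __init__(self, db):            # db: a pymongo database handle
        self.db = db
    def find(self, collection, query):
        return list(self.db[collection].find(query))

class UnifiedQuery:
    """Routes one standardized query to whichever registered backend holds the data."""
    def __init__(self):
        self.backends: Dict[str, StoreAdapter] = {}
    def register(self, name: str, adapter: StoreAdapter) -> None:
        self.backends[name] = adapter
    def find(self, backend: str, collection: str, query: Dict[str, Any]):
        return self.backends[backend].find(collection, query)

Adapters for key-value, column-family, and graph stores would implement the same find() contract, which is how a single operation set can abstract all four NoSQL categories.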

Paper 63: Underwater Quality Enhancement Based on Mixture Contrast Limited Adaptive Histogram and Multiscale Fusion

Abstract: This paper presents a novel approach for enhancing the visual quality of underwater images using various spatial processing techniques. The research addresses common issues in underwater imaging, such as color distortion, low clarity, low contrast, bluish or greenish tints caused by light scattering and absorption, and the presence of underwater organisms. To solve these problems, we utilize image processing methods including white balancing, Contrast Limited Adaptive Histogram Equalization (CLAHE) in the Lab and HSV color spaces, sharpening, weight map generation, and multiscale fusion. The effectiveness of the proposed approach is evaluated quantitatively using mean squared error (MSE), peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM). The results indicate that the optimal CLAHE parameters are a block size of 4×4 and a clip limit of 1.2; these yielded an MSE of 0.7594, a PSNR of 20.7121, and an SSIM of 0.8826, demonstrating superior performance compared to previous research. A qualitative evaluation was also conducted with eight respondents, who rated overall visual quality, color fidelity, and contrast enhancement; the assessment produced satisfactory outcomes, with a mean score of 4.3278 and a standard deviation of 0.7238. Overall, this research demonstrates that effective and efficient enhancement of underwater image quality can be achieved computationally using simple techniques with appropriate parameters and placement, enabling better scientific research and exploration of the underwater world.

Author 1: Septa Cahyani
Author 2: Anny Kartika Sari
Author 3: Agus Harjoko

Keywords: CLAHE; Color space enhancement; luminance; sharpening

PDF
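
The core CLAHE-in-Lab step, using the reported optimal parameters (4×4 block size, clip limit 1.2), can be expressed with OpenCV as below; the rest of the pipeline (white balancing, HSV-space CLAHE, sharpening, weight maps, multiscale fusion) is omitted from this sketch.

import cv2

def enhance_underwater(bgr):
    """Apply CLAHE to the L channel in Lab space with the reported best parameters."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=1.2, tileGridSize=(4, 4))
    l = clahe.apply(l)                 # equalize luminance only, preserving color
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

# result = enhance_underwater(cv2.imread("reef.png"))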

Paper 64: A Predictive Model for Software Cost Estimation Using ARIMA Algorithm

Abstract: Technology is a differentiator in business today, playing a decisive role through the software that supports it. To build such software while avoiding risks during implementation and construction, its cost must be estimated. Cost estimation is the process of estimating the effort, time, and resources needed to build a software project; it is a crucial process because it enables good planning during construction and implementation and reduces the risks the project may face. Previous studies have sought to build models and methods for this, but they were not accurate enough. This study therefore builds a model using the Autoregressive Integrated Moving Average (ARIMA) algorithm. Five datasets were used: COCOMO81, COCOMONasaV1, COCOMONasaV2, Desharnais, and China. The data were processed to remove noise and missing values, visualized to understand them, and linked as a time series to predict future values, then used to train the ARIMA algorithm. To verify the model's effectiveness and efficiency, four well-known evaluation criteria were used: mean magnitude of relative error (MMRE), root mean square error (RMSE), median magnitude of relative error (MdMRE), and prediction accuracy (PRED). The experiment showed impressive software cost estimation results, with MMRE, RMSE, MdMRE, and PRED of 0.07613, 0.04999, 0.03813, and 95% for the COCOMO81 dataset, respectively; 0.02227, 0.02899, 0.01113, and 97.1% for COCOMONasaV1; 0.01035, 0.00650, 0.00517, and 99.35% for COCOMONasaV2; 0.00001, 0.00430, 0.00008, and 99.57% for the China dataset; and 0.00004, 0.0039, 0.00002, and 99.6% for Desharnais. The results of this study are promising and distinctive compared to recent studies, and they also contribute to good business planning and risk reduction.

Author 1: Moatasem M. Draz
Author 2: Osama Emam
Author 3: Safaa M. Azzam

Keywords: Software cost estimation; software effort estimation; promise repository; SCE; ARIMA

PDF
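
For illustration, fitting and forecasting with ARIMA reduces to a few lines with statsmodels; the effort values and the (1, 1, 1) order below are placeholders, not the paper's data or fitted configuration.

import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# A time-ordered series of project effort/cost values (dummy numbers).
effort = pd.Series([12.0, 11.4, 13.1, 12.7, 14.0, 13.6, 15.2])

model = ARIMA(effort, order=(1, 1, 1)).fit()   # (p, d, q) chosen for illustration
print(model.forecast(steps=3))                 # predicted future cost values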

Paper 65: Defense Mechanisms for Vehicular Networks: Deep Learning Approaches for Detecting DDoS Attacks

Abstract: Vehicular Ad-hoc Networks (VANETs) are engineered to meet the distinctive demands of vehicular communication, facilitating interactions between vehicles and roadside infrastructure to enhance road safety, traffic efficiency, and diverse applications such as traffic management and infotainment services. However, the looming threat of Distributed Denial of Service (DDoS) attacks in VANETs poses a significant challenge, potentially disrupting critical services and compromising user safety. To address this challenge, this study proposes a novel deep learning (DL)-based model that integrates Long Short-Term Memory (LSTM) architecture with self-attention mechanisms to effectively detect DDoS attacks in VANETs. By incorporating autoencoders for feature extraction, the model leverages the sequential nature of VANET data, prioritizing relevant information within input sequences to accurately identify malicious activities. With an impressive accuracy of 98.39%, precision of 97.79%, recall of 98.00%, and F1-score of 98.20%, the proposed approach demonstrates remarkable efficacy in safeguarding VANETs against cyber threats, thereby contributing to enhanced road safety and network reliability.

Author 1: Lekshmi V
Author 2: R. Suji Pramila
Author 3: Tibbie Pon Symon V A

Keywords: Vehicular Ad-hoc Networks; Denial of Service attacks; deep learning; auto encoder; Long Short-Term Memory; self-attention mechanism; cyber threats; network reliability

PDF
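
A compact Keras sketch of the detector's core idea described above, an LSTM encoder followed by self-attention over the sequence; the input dimensions are placeholders, and the autoencoder feature-extraction stage from the abstract is omitted.

import tensorflow as tf

# Encoder: LSTM over per-flow feature sequences, then self-attention
# (query = value = LSTM outputs), then a binary attack/benign output.
inputs = tf.keras.Input(shape=(20, 32))          # 20 timesteps, 32 features (assumed)
h = tf.keras.layers.LSTM(64, return_sequences=True)(inputs)
att = tf.keras.layers.Attention()([h, h])        # self-attention over the sequence
z = tf.keras.layers.GlobalAveragePooling1D()(att)
out = tf.keras.layers.Dense(1, activation="sigmoid")(z)

model = tf.keras.Model(inputs, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])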

Paper 66: Towards Dimension Reduction: A Balanced Relative Discrimination Feature Ranking Technique for Efficient Text Classification (BRDC)

Abstract: The volume and complexity of textual data have significantly increased worldwide, demanding a comprehensive understanding of machine learning techniques for accurate text classification in various applications. In recent years, there has been significant growth in natural language processing (NLP) and neural networks (NNs), and deep learning (DL) models have outperformed classical machine learning approaches in text classification tasks such as sentiment analysis, news categorization, question answering, and natural language inference. Dimension reduction is crucial for refining classifier performance and decreasing the computational cost of text classification. Existing methodologies, such as the Improved Relative Discrimination Criterion (IRDC) and the Relative Discrimination Criterion (RDC), lack proper normalization and are not well balanced in how they rank terms across distinct classes. This study introduces an improved feature-ranking metric called the Balanced Relative Discrimination Criterion (BRDC), which converts document frequencies into term-count estimations, facilitating a normalized and balanced classification approach. The proposed methodology demonstrated superior performance compared to existing techniques. Experiments evaluated the proposed technique using Decision Tree (DT), Logistic Regression (LR), Multinomial Naïve Bayes (MNB), and Long Short-Term Memory (LSTM) models on three benchmark datasets: Reuters-21578, 20newsgroup, and AG News. The findings indicate that LSTM outperformed the other models and can be applied in conjunction with the proposed BRDC approach.

Author 1: Muhammad Nasir
Author 2: Noor Azah Samsudin
Author 3: Wareesa Sharif
Author 4: Souad Baowidan
Author 5: Humaira Arshad
Author 6: Muhammad Faheem Mushtaq

Keywords: Text classification; balanced relative discrimination criterion; dimension reduction; feature ranking; deep learning; machine learning

PDF

Paper 67: Reading Recommendation Technology in Digital Libraries Based on Readers' Social Relationships and Readers' Interests

Abstract: In recent years, the construction of digital libraries has contributed to the advancement of smart lending services. Suggesting appropriate books for readers from a vast collection remains a primary obstacle in the current construction of digital libraries. A fusion method for recommending content to readers with diverse interests is proposed. The method first extracts short-term borrowing behavior characteristics while also considering readers' social similarity characteristics, and recommends content through target ranking search. For long-term readers, a reading recommendation method is proposed that models readers' interests from their reading behaviors through an attention mechanism. It constructs readers' preference models using synergistic metrics and achieves content recommendation through preference fusion. The proposed model converged fastest and attained the lowest logarithmic loss, 1.85, in recommending readings for multi-interest readers. Additionally, the accuracy of the proposed model in science reading recommendation scenarios was 97.24%, surpassing other models. In the reading recommendation experiments for extended borrowings, the model demonstrated superior recall and precision of 0.198 and 0.062, respectively. Lastly, in a comparison of recommendation errors across reading models, the proposed model exhibited a root-mean-square error of 0.731 and a mean absolute error of 0.721, the lowest among the three models. The proposed model demonstrates excellent recommendation effectiveness in real-world reading recommendation scenarios. This research offers significant technical references for the advancement of related recommendation technology and the development of digital libraries.

Author 1: Weiying Zheng

Keywords: Digital library; recommend; behavioral characteristics; interest; attention mechanism

PDF

Paper 68: A Computer Vision-Based Pill Recognition Application: Bridging Gaps in Medication Understanding for the Elderly

Abstract: Identifying prescribed medication accurately remains a challenge for many people, particularly older individuals who may experience medication errors due to impaired vision, lack of English proficiency, or other disabilities. This problem is more prevalent in healthcare settings where pills are often distributed in strips rather than in traditional packaging, increasing the risk of dangerous consequences. To address this issue, a mobile application has been developed using computer vision and artificial intelligence to accurately recognize pills and provide relevant information in text and speech formats. The approach integrates the GPT-4 API for imprint extraction and YOLOv8 for object detection, significantly enhancing the application's accuracy. The goal is to improve medication management for vulnerable populations facing unique accessibility challenges. The application has achieved an overall accuracy of 90.89%, demonstrating its effectiveness in assisting users to identify and manage their medication.
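
A minimal sketch of the detection step using the ultralytics YOLOv8 API; the weights file and input image are hypothetical stand-ins for the app's fine-tuned pill model.

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")          # stand-in weights; the app would load a pill-tuned model
    results = model("pill_strip.jpg")   # hypothetical photo of a medication strip
    for box in results[0].boxes:        # one entry per detected pill
        print(int(box.cls), float(box.conf), box.xyxy.tolist())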

Author 1: Taif Alahmadi
Author 2: Rana Alsaedi
Author 3: Ameera Alfadli
Author 4: Ohoud Alzubaidi
Author 5: Afnan Aldhahri

Keywords: Pill detection; seniors; computer vision; artificial intelligence

PDF

Paper 69: Educational Enhancement Through Augmented Reality Simulation: A Bibliometric Analysis

Abstract: Augmented Reality (AR) has become a key technology in the education sector, offering interactive learning experiences that improve student engagement and understanding. Despite its increasing use, a thorough summary of AR research in educational environments is still required. This study applies bibliometric analysis to identify trends in this research field. Data from the Scopus database were analyzed with VOSviewer software (version 1.6.19), covering academic publications from 2018 to 2023. The original dataset of 4858 articles was narrowed down to 1109 articles concentrating on "augmented reality" AND "simulation" in student learning. Methods such as advanced data mining, co-citation analysis, and network visualization were utilized to outline the structure and trends in this research area. Key findings include a significant rise in research activity over the past decade, identification of the ten most prolific authors in AR simulation studies, and detailed visualizations of information distribution. Significant challenges include high costs and difficulties in technical integration. The study addresses these issues through interdisciplinary research that combines educational theory with AR technology. Results demonstrate growing interest in AR applications, particularly within STEM education, driven by technological advancements and increased funding. Despite these challenges, the potential of AR to enhance learning outcomes is clear. This research concludes that AR simulations can be a valuable educational tool, with further studies needed to explore the scalability of AR applications in various educational settings and to develop evidence-based guidelines for effective integration.

Author 1: Zuhaili Mohd Arshad
Author 2: Mohamed Nor Azhari Azman
Author 3: Olzhas Kenzhaliyev
Author 4: Farid R. Kassimov

Keywords: Augmented reality; simulation; learning; education

PDF

Paper 70: Precision Construction of Salary Prediction System Based on Deep Neural Network

Abstract: Currently, most recruitment websites use keyword search or job-category classification to filter the salary information that job seekers care about most, so job seekers must spend considerable time and effort to understand the salary range of their desired position. To help job seekers quickly and accurately understand the salary and market value of their desired position, the Word2vec and latent Dirichlet allocation models are used to obtain topic features, which serve as the basis for the salary prediction model. The study uses a deep neural network trained with the adaptive moment estimation (Adam) algorithm to construct the salary prediction model, and the final salary prediction system is built on a browser/server architecture. The results showed that on the training set, the maximum accuracy of the salary prediction model was 96.71%, the minimum was 93.75%, and the average was 95.07%. The mean absolute percentage error and mean square error of the model were 5.661% and 0.3462, respectively. The maximum average response time of the salary prediction system was 134.2 s, the minimum was 2.02 s, and the maximum throughput was 1,500,000 bytes/s. The salary prediction model performs well and can provide technical support for salary prediction.
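
A small gensim sketch of the feature-extraction stage: Word2vec embeddings plus LDA topic weights over toy tokenized job postings; in the paper, these topic features feed the Adam-trained deep neural network. The documents and parameters here are illustrative placeholders.

    from gensim.corpora import Dictionary
    from gensim.models import LdaModel, Word2Vec

    docs = [["python", "backend", "developer"],
            ["retail", "sales", "manager"]]              # toy tokenized job postings
    w2v = Word2Vec(docs, vector_size=50, min_count=1)    # word embeddings per term

    dictionary = Dictionary(docs)
    bow = [dictionary.doc2bow(d) for d in docs]
    lda = LdaModel(bow, num_topics=2, id2word=dictionary)
    topic_features = [lda.get_document_topics(b, minimum_probability=0.0) for b in bow]
    print(topic_features[0])   # per-topic weights, usable as inputs to the DNN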

Author 1: Yuping Wang
Author 2: MingYan Bai
Author 3: Changjiang Liao

Keywords: Deep neural network; Adam; salary; prediction; system

PDF

Paper 71: Development and Research of a Method for Multi-Level Protection of Transmitted Information in IP Networks Based on Asterisk IP PBX Using Various Codecs

Abstract: Research indicates that existing symmetric and asymmetric cryptosystems, as well as steganography, fail to ensure the requisite security and reliability in IP networks where the Asterisk IP PBX handles information transmission through call switching. Consequently, this publication develops and investigates a four-tiered information protection method for various voice codecs in IP networks based on the Asterisk IP PBX. Multi-tiered protection significantly prolongs the cryptanalysis effort required of malicious actors, thereby deterring information interception. The primary achievement of this research lies in keeping the latency incurred as information traverses the four layers of protection below 150 milliseconds, a benchmark widely acknowledged as the threshold for acceptable voice traffic quality during transmission. This 150-millisecond delay budget is pivotal in telecommunications networks; exceeding it at the receiving end may result in jitter, audio degradation, unintelligibility, and other impairments. The devised methodology can be employed in networks transmitting highly classified or business-sensitive information. We contend that the developed encryption enhancement methodology, which prolongs the cryptanalysis duration for malicious entities, together with the conducted analysis, represents a novel scientific contribution.
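
The abstract does not specify the cipher used at each tier; the sketch below uses nested Fernet (AES-based) encryption purely to illustrate four-tier wrapping and a latency check against the 150 ms budget. The voice-frame payload is a stand-in.

    import time
    from cryptography.fernet import Fernet

    tiers = [Fernet(Fernet.generate_key()) for _ in range(4)]  # one cipher per protection tier

    frame = b"\x00" * 160               # stand-in for a 20 ms G.711 voice frame
    t0 = time.perf_counter()
    ct = frame
    for tier in tiers:                  # sender: wrap through all four tiers
        ct = tier.encrypt(ct)
    pt = ct
    for tier in reversed(tiers):        # receiver: unwrap in reverse order
        pt = tier.decrypt(pt)
    ms = (time.perf_counter() - t0) * 1000
    assert pt == frame
    print(f"four-tier crypto round trip: {ms:.2f} ms (must stay well under 150 ms)")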

Author 1: Mubarak Yakubova
Author 2: Tansaule Serikov
Author 3: Olga Manankova

Keywords: Asterisk PBX; IP telephony systems; codecs; data security; Python

PDF

Paper 72: Unmanned Aerial Vehicles Following Photography Path Planning Technology Based on Kinematic and Adaptive Models

Abstract: As a representative invention of modern intelligent technology, unmanned aerial vehicles are receiving increasing attention in various fields. However, conventional path planning does not allow unmanned aerial vehicles to autonomously adapt their tracking paths to dynamic changes. To address this issue, this study proposes a path-planning algorithm for unmanned aerial vehicles following photography based on kinematic and adaptive models. A global coordinate system and an aircraft coordinate system are constructed from the motion relationship between the unmanned aerial vehicle and the tracked target, and both are converted into a horizontal projection coordinate system to digitize the observed data. On this basis, an adaptive control model is established on top of a circular tracking path planning algorithm, and finally, simulation experiments and practical application tests are conducted in combination with the following-and-shooting planning algorithm. The results showed that the best fitness of the proposed algorithm was 97.56, compared with 93.87 and 92.79 for the two baseline algorithms, and its path time of 38 s and average speed of 3.4 m/s were also better than those of the baselines. In the real-flight experiment, the algorithm planned six circular paths, the relative distance between the unmanned aerial vehicle and the target stayed within the range of 200 m to 600 m, and the actual trajectory overlapped closely with the planned trajectory. The results show that the proposed algorithm not only stabilizes the illumination angle within an effective range during path planning, but also exhibits fast convergence and superior path-planning performance in practical applications.
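
A minimal numpy illustration of the coordinate conversion the abstract describes: translating a target into a UAV-centred frame and rotating by the heading to obtain a horizontal projection. The 2-D simplification and the sample values are assumptions.

    import numpy as np

    def to_horizontal_projection(target_xy, uav_xy, heading_rad):
        # Translate the target into the UAV-centred frame, then rotate by the
        # negative heading so the x-axis points along the aircraft's nose.
        c, s = np.cos(heading_rad), np.sin(heading_rad)
        R = np.array([[c, s], [-s, c]])
        return R @ (np.asarray(target_xy) - np.asarray(uav_xy))

    print(to_horizontal_projection([600.0, 200.0], [100.0, 100.0], np.pi / 4))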

Author 1: Sa Xiao

Keywords: Kinematic model; adaptive control; unmanned aerial vehicles; path planning; follow photography

PDF

Paper 73: Decoding Visual Question Answering Methodologies: Unveiling Applications in Multimodal Learning Frameworks

Abstract: This research investigates the intricacies of Visual Question Answering (VQA) methodologies and their applications within multimodal learning frameworks. Our approach, founded on the synergy of Multimodal Compact Bilinear Pooling (MCB) and Neural Module Networks (NMN), offers a comprehensive understanding of visual and textual elements. Notably, the model excels in responding to Descriptive questions with an accuracy of 88%, showcasing a nuanced grasp of detailed inquiries. Factual questions follow closely with an 86% accuracy, while Inferential questions exhibit commendable performance at 82%. Precision scores reinforce the model's reliability, registering 85% for Descriptive, 82% for Factual, and 78% for Inferential questions. Robust recall scores further emphasize the model's ability to retrieve relevant information across question types. The F1 score, reflecting a harmonious blend of precision and recall, attests to the model's strong overall performance: 87% for Descriptive, 84% for Factual, and 80% for Inferential questions. Visualizations through boxplots and violin plots affirm the model's consistency in accuracy and precision across question types. Future directions encompass dataset expansion, integration of transfer learning, attention mechanisms for interpretability, and exploration of broader multimodal applications beyond VQA. This research establishes a resilient framework for advancing VQA methodologies, paving the way for enhanced multimodal learning in diverse contexts.

Author 1: Y Harika Devi
Author 2: G Ramu

Keywords: Visual Question Answering (VQA); Multimodal Learning; Neural Module Networks (NMN); Multimodal Compact Bilinear Pooling (MCB); question types; F1 score

PDF

Paper 74: Effective Feature Extraction Using Residual Attention and Local Context Aware Classifier for Crop Yield Prediction

Abstract: Crop yield forecasting plays a key role in agricultural management and planning, which is essential for food security and production at regional to global scales. However, crop yield prediction is a challenging task owing to the difficulty of extracting spatial context and local semantic features and of handling spatiotemporal relations. To address these issues, a comprehensive feature-extraction scheme is developed along with an effective deep-learning classifier. In this paper, the Residual Attention and Local Context Aware Classifier (RALCAC) is developed to obtain appropriate features from remote-sensing crop yield images. RALCAC captures spatial context through a Residual Attention (RA) module and extracts local semantic information, both of which support a detailed depiction of the crop. A Convolutional Long Short-Term Memory (ConvLSTM) network then predicts crop yield from the comprehensive RALCAC features. RALCAC is evaluated using Root Mean Squared Error (RMSE) and the coefficient of determination, with existing methods such as DeepYield, SSTNN, and 3DCNN used for comparison. The RMSE of RALCAC on the MODIS dataset is 3.257, lower than that of DeepYield.
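
A minimal Keras sketch of a ConvLSTM regression head of the kind the abstract describes; the input dimensions and layer sizes are placeholders, and the RALCAC feature extractor itself is not reproduced.

    from tensorflow.keras import layers, models

    # (time steps, height, width, channels) of per-period remote-sensing feature maps
    model = models.Sequential([
        layers.Input(shape=(8, 32, 32, 16)),
        layers.ConvLSTM2D(32, kernel_size=3, padding="same"),  # spatiotemporal encoder
        layers.GlobalAveragePooling2D(),
        layers.Dense(1),                                       # predicted yield (regression)
    ])
    model.compile(optimizer="adam", loss="mse")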

Author 1: Vinaykumar Vajjanakurike Nagaraju
Author 2: Ananda Babu Jayachandra
Author 3: Balaji Prabhu Baluvaneralu Veeranna
Author 4: Ravi Prakash Madenur Lingaraju

Keywords: Convolutional long short term memory; crop yield prediction; residual attention and local context-aware network; root mean squared error; spatial context data

PDF

Paper 75: Business Insights into the Internet of Things: User Experiences and Organizational Strategies

Abstract: The Internet of Things (IoT) has revolutionized business operations across industries by integrating physical devices into digital networks. This study surveys the extensive business literature on IoT, particularly its impact from the perspectives of users and organizations, and provides a comprehensive analysis of the effects, challenges, and opportunities of IoT in the business domain by integrating various perspectives and insights. We analyze trends in IoT adoption and explore the conditions promoting its widespread use in different industries and regions. The research investigates user perspectives, such as acceptance, user experience, and the ethics of IoT. This paper focuses on how IoT will lead to new business models and the implications for strategy, operations, and client relationships. It critically reviews challenges, such as security vulnerabilities, compatibility issues, and legal frameworks that currently restrict effortless integration of IoT in industry from a business standpoint. Finally, we provide recommendations for further research.

Author 1: Yang WEI

Keywords: Internet of Things; business literature; user perspectives; organizational impact; adoption trends; data-driven strategies

PDF

Paper 76: SocialBullyAlert: A Web Application for Cyberbullying Detection on Minors' Social Media

Abstract: The severe problem of cyberbullying towards minors, which has been shown to have significant impacts on the mental and emotional health of children and adolescents, is addressed. The effectiveness of existing artificial intelligence models and neural networks in detecting cyberbullying on social media is then analyzed. In response, a web platform is developed whose contribution is to identify offensive content, adapt to varied slang and idioms, and offer an intuitive interface with high usability in terms of user experience (UX) and user interface (UI) design. The application was validated with cyberbullying experts (teachers, principals, and psychologists), and the UI/UX design was also validated with users (parents). Limitations and future challenges are discussed, including varying cyberbullying regulations, the need for constant updates, and adaptation to multiple languages and cultural contexts. This highlights the importance of ongoing research to enhance parental control tools in digital environments.

Author 1: Elizabeth Adriana Nina-Gutiérrez
Author 2: Jesús Emerson Pacheco-Alanya
Author 3: Juan Carlos Morales-Arevalo

Keywords: Cyberbullying; artificial intelligence (AI); neural networks; parental control; social media; offensive content detection; User Experience (UX); User Interface (UI); mental health

PDF

Paper 77: Explainable Artificial Intelligence for Urban Planning: Challenges, Solutions, and Future Trends from a New Perspective

Abstract: Integrating Artificial Intelligence (AI) into urban planning transforms resource allocation and sustainable development. Nevertheless, the lack of transparency in some AI models raises questions about accountability and public trust. This paper investigates the role of Explainable AI (XAI) in urban planning, focusing on its ability to improve transparency and build trust between stakeholders. The study comprehensively examines approaches to achieving explainability, encompassing rule-based systems and interpretable machine learning models. Case studies illustrate the effective application of XAI in practical urban planning situations and highlight the critical role of transparency in the decision-making flow. This study examines the barriers that hinder the smooth integration of XAI into urban planning methodologies. These challenges include ethical concerns, the complexity of the models used, and the need for explanations tailored to specific areas.

Author 1: Shan TONG
Author 2: Shaokang LI

Keywords: Explainable artificial intelligence; urban planning; rule-based systems; machine learning

PDF

Paper 78: Enhanced Harris Hawks Optimization Algorithm for SLA-Aware Task Scheduling in Cloud Computing

Abstract: Cloud computing has revolutionized how Software as a Service (SaaS) providers deliver applications by leasing shareable resources from Infrastructure as a Service (IaaS) providers. However, meeting users' Quality of Service (QoS) parameters while maximizing profits from the cloud infrastructure presents a significant challenge. This study addresses this challenge by proposing an Enhanced Harris Hawks Optimization (EHHO) algorithm for cloud task scheduling, specifically designed to satisfy Service Level Agreements (SLAs), meet users' QoS requirements, and enhance resource utilization efficiency. Inspired by the cooperative hunting behavior of Harris's hawks in nature, the basic HHO algorithm has shown promise in finding optimal solutions to specific problems, but it often converges to local optima, impairing solution quality. To mitigate this issue, our study enhances the HHO algorithm by introducing an exploration factor that optimizes parameters and improves its exploration capabilities. The proposed EHHO algorithm is assessed against established optimization algorithms, including the Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO). The results demonstrate that our method improves makespan relative to GA, ACO, and PSO by 19.2%, 17.1%, and 20.4%, respectively, for general workloads, and by 17.1%, 17.3%, and 17.2% for BigDataBench workloads. Furthermore, EHHO substantially reduces SLA violations compared to PSO, ACO, and GA, achieving improvements of 55.2%, 41.4%, and 33.6%, respectively, for general workloads, and 61.9%, 23.1%, and 52.7%, respectively, for BigDataBench workloads.

Author 1: Junhua Liu
Author 2: Chaoyang Lei
Author 3: Gen Yin

Keywords: Cloud computing; scheduling; optimization; SLA; SaaS

PDF

Paper 79: Optimization of a Hybrid Renewable Energy System Based on Meta-Heuristic Optimization Algorithms

Abstract: Islands represent strategic platforms for exploring and exploiting marine resources. This article presents a hybrid renewable electric system (HRES) designed to power the island communities of Djerba in Tunisia. The system integrates photovoltaic panels, wind turbines, tidal turbines, hydraulic systems, biomass, and batteries, taking into account available climatic and land resources. A multi-objective optimization method is proposed for sizing this system to minimize power loss and energy costs. Two optimization algorithms, MOPSO (Multi-Objective Particle Swarm Optimization) and SSO (Social Spider Optimization) have been used to solve this problem. MATLAB simulations show that MOPSO offers better convergence and coverage than SSO. The results confirm the viability of the proposed algorithm and method for optimal sizing. In addition, they enable an in-depth analysis of the electrical production and economic benefits associated with the various system components.

Author 1: Ramia Ouederni
Author 2: Bechir Bouaziz
Author 3: Faouzi Bacha

Keywords: Hybrid renewable energy system; techno-economic optimization; optimal sizing; MOPSO; SSO

PDF

Paper 80: Pilot Study on Consumer Preference, Intentions and Trust on Purchasing-Pattern for Online Virtual Shops

Abstract: A user's behaviour toward an item is a choice predicated on their perception of the item and the intent behind the purchase. While virtual stores improve consumer coverage, monetization, and ease of product delivery, users' trust is lowered when advertised products are not delivered and purchased items are replaced with new or similar products. Each transaction reflects a user's buying behaviour, which, if harnessed, will help businesses reshape their inventory to handle challenges arising from feature evolution, feature drift, product replacement, and concept evolution. Our study seeks to resolve the issues of lowered consumer trust and preference via a Bayesian network with trust, preference, and intent as features of the virtual store, and investigates their effectiveness in design and their usefulness in promoting e-commerce in Nigeria. The data consist of 8,693 records for Jumia collected via the Google Play Scraper library from over 586 respondents. Expert evaluation rated the design choice in the use of these parameters as high.

Author 1: Sebastina Nkechi Okofu
Author 2: Kizito Eluemunor Anazia
Author 3: Maureen Ifeanyi Akazue
Author 4: Margaret Dumebi Okpor
Author 5: Amanda Enadona Oweimieto
Author 6: Clive Ebomagune Asuai
Author 7: Geoffrey Augustine Nwokolo
Author 8: Arnold Adimabua Ojugo
Author 9: Emmanuel Obiajulu Ojei

Keywords: Consumer preference; consumer trust; purchasing-pattern; purchase intentions; online virtual shops

PDF

Paper 81: Research on the Path of Enhancing Employment and Entrepreneurship Ability of Deaf College Students Based on Knowledge Graph

Abstract: Enhancing employment capabilities and selecting suitable career paths are crucial for deaf university students. The advancement of knowledge graph technology has opened up technical possibilities for career decision-making among these students. This paper calculates user preferences and introduces an exponential decay function integrated with a time factor to accurately reflect the dynamic changes in user interest preferences over time. Leveraging knowledge graphs for personalized recommendations, the study proposes recommending necessary skills to enhance employment and entrepreneurial capabilities among students. Additionally, it employs knowledge graphs to suggest more suitable career paths for deaf university students. Finally, through empirical validation, the paper demonstrates the effectiveness of the proposed hybrid clustering and interest-based collaborative filtering recommendation algorithm.
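
A small sketch of an exponential decay function with a time factor, of the kind the abstract describes for down-weighting older interactions in the preference profile; the half-life parameter is an illustrative assumption.

    import math, time

    def decayed_preference(weight, event_ts, now_ts, half_life_days=30.0):
        # Exponential decay with a time factor: older interactions count less,
        # so the profile tracks how interests drift over time.
        age_days = (now_ts - event_ts) / 86400.0
        lam = math.log(2) / half_life_days   # decay rate from the chosen half-life
        return weight * math.exp(-lam * age_days)

    now = time.time()
    print(decayed_preference(1.0, now - 60 * 86400, now))  # a 60-day-old click weighs ~0.25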

Author 1: Pengyu Liu

Keywords: Knowledge graph; hearing impaired college students; employment and entrepreneurial ability; interest matching; feature extraction

PDF

Paper 82: Data Sensitivity Preservation-Securing Value Using Varied Differential Privacy Method (SP-SV Method)

Abstract: Numerous governmental entities, including hospitals and bureaus of statistics, as well as other functional units, have shown great interest in personalized privacy. Numerous models and techniques for data publishing have been put forward, the majority of which concentrate on a single sensitive attribute. A few scholarly articles have highlighted the need to protect the privacy of data that includes many sensitive attributes. With current techniques, the sanctity of privacy in the data decreases when many sensitive values are published, even while k-anonymity and l-diversity are maintained simultaneously. Furthermore, customization has not been investigated in this context. We describe a publishing strategy in this research that handles customization when publishing material with many sensitive attributes for analysis. The model uses a slicing strategy reinforced by fuzzy approaches for numerical sensitive attributes based on variety, generalization of categorical sensitive attributes, and probabilistic anonymization of quasi-identifiers using differential privacy. We limit the confidence that an adversary may place in a sensitive value in a publicly available data collection to what can be inferred from known information. Synthetic datasets based on real-life healthcare data were used in the trials. The outcomes confirm that data value is maintained while individuals' privacy is secured.
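
A minimal sketch of the Laplace mechanism commonly used for the differential-privacy step the abstract mentions; the sensitivity and epsilon values here are illustrative, not the paper's settings.

    import numpy as np

    def laplace_release(true_value, sensitivity, epsilon, rng=None):
        # Classic epsilon-differentially-private Laplace mechanism:
        # noise drawn with scale = sensitivity / epsilon.
        rng = rng or np.random.default_rng()
        return true_value + rng.laplace(0.0, sensitivity / epsilon)

    print(laplace_release(42.0, sensitivity=1.0, epsilon=0.5))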

Author 1: Supriya G Purohit
Author 2: Veeragangadhara Swamy

Keywords: Big data; privacy preservation; security; data publish; data privacy

PDF

Paper 83: Precision Farming with AI: An Integrated Deep Learning Solution for Paddy Leaf Disease Monitoring

Abstract: Paddy rice, an essential food source for millions, is highly susceptible to various leaf diseases that threaten its yield and quality. This study introduces a cutting-edge hybrid deep learning model designed to address the critical need for accurate and timely identification and classification of paddy leaf diseases. Traditional methods often lack the precision and efficiency required for effective disease detection, necessitating the development of more sophisticated approaches. Our proposed model leverages the feature extraction capabilities of EfficientNetB0 and the hierarchical relationship capturing abilities of the Capsule Network, resulting in superior disease classification performance. The hybrid model demonstrates outstanding accuracy, achieving 97.86%, along with precision, recall, and F1-scores of 97.98%, 98.01%, and 97.99%, respectively. It effectively differentiates between diseases such as Narrow Brown Spot, Bacterial Leaf Blight, Leaf Blast, Leaf Scald, Brown Spot, and healthy leaves, showcasing its robustness in practical applications. This research highlights the importance of advanced technological interventions in agriculture, providing a scalable and efficient solution for disease detection in paddy crops. The hybrid deep learning model offers significant benefits to farmers and agricultural stakeholders, facilitating timely disease management, optimizing resource use, and improving crop management practices. Ultimately, this innovation supports agricultural sustainability and enhances global food security.

Author 1: Pramod K
Author 2: V. R. Nagarajan

Keywords: Paddy rice; leaf diseases; hybrid deep learning; EfficientNetB0; capsule network

PDF

Paper 84: Brain and Heart Rate Variability Patterns Recognition for Depression Classification of Mental Health Disorder

Abstract: Depression is common and dangerous if untreated; detecting depression patterns early and accurately is essential for providing timely interventions and assistance. We present a novel depression prediction method (depressive-deep) that transforms preprocessed brain electroencephalogram (EEG) and ECG-based heart-rate variability (HRV) signals into 2D scalograms. We then extract features from the 2D scalogram images using a fine-tuned MobileNetV2 deep learning (DL) architecture and integrate an AdaBoost ensemble learning algorithm to improve the model's performance. Our study suggests that ensemble learning can accurately predict asymmetric and symmetric depression patterns from multimodal signals such as EEG and ECG. These patterns include major depressive state (MDS), cognitive and emotional arousal (CEA), mood disorder patterns (MDPs), mood and emotional regulation (MER), and stress and emotional dysregulation (SED). To develop the depressive-deep model, we performed a pre-training strategy on two publicly available datasets, MODMA and SWELL-KW. Sensitivity (SE), specificity (SP), accuracy (ACC), F1-score, precision (P), Matthew's correlation coefficient (MCC), and area under the curve (AUC) were analyzed to determine the best depression prediction model. Moreover, we used wearable devices over the Internet of Medical Things (IoMT) to extract signals and check the system's generalizability, and applied several assessment criteria, including cross-validation, to ensure model robustness. The depressive-deep and feature-extraction strategies outperformed the other methods in depression prediction, obtaining an ACC of 0.96, SE of 0.98, SP of 0.95, P of 0.95, F1-score of 0.96, and MCC of 0.96. The main findings suggest that 2D scalograms combined with the depressive-deep pipeline (fine-tuned MobileNetV2 + AdaBoost) outperform existing approaches in detecting early depression, improving mental health diagnosis and treatment.
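
A minimal sketch of the scalogram-plus-backbone feature path: a continuous wavelet transform (via pywt) turns a 1-D signal into a 2-D scalogram, which a MobileNetV2 backbone embeds for a downstream AdaBoost classifier. The signal, scale range, and wavelet choice are placeholders rather than the paper's configuration.

    import numpy as np
    import pywt
    import tensorflow as tf
    from tensorflow.keras.applications import MobileNetV2

    signal = np.random.randn(1024)                     # stand-in EEG/HRV segment
    coeffs, _ = pywt.cwt(signal, np.arange(1, 65), "morl")
    scalogram = np.abs(coeffs)[..., np.newaxis]        # (scales, time, 1) image

    img = tf.image.resize(scalogram, (224, 224))       # match the backbone's input size
    img = tf.repeat(img, 3, axis=-1)[tf.newaxis, ...]  # grey -> 3 channels, add batch dim

    backbone = MobileNetV2(include_top=False, pooling="avg")
    features = backbone(img)                           # one 1280-d embedding per scalogram
    print(features.shape)                              # these would feed the AdaBoost ensemble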

Author 1: Qaisar Abbas
Author 2: M. Emre Celebi
Author 3: Talal AlBalawi
Author 4: Yassine Daadaa

Keywords: Mental health disorder; depression patterns; electroencephalogram; heart rate variability; deep learning; MobileNet; behavioral analysis; internet of medical things

PDF

Paper 85: A Systematic Review on Assessment in Adaptive Learning: Theories, Algorithms and Techniques

Abstract: Computerized knowledge assessments have become increasingly popular, especially since COVID-19 has transformed assessment practices from both technological and pedagogical standpoints. This systematic review of the literature aims to analyze studies concerning the integration of adaptive assessment techniques and algorithms in Learning Management Systems (LMS) to generate a global vision of their potential to enhance the quality and adaptability of learning, and to provide recommendations for their application. A review of international indexed databases, specifically Scopus, was conducted, focusing on studies published between 2000 and 2024. The PICO framework was used to formulate the search query and the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework to select 66 relevant studies based on inclusion and exclusion criteria such as publishing year, document type, subject area, language, and other factors. The results reveal that integrating adaptive assessments positively impacts the quality of learning by generating short tests dynamically adapted to students’ skills, learning styles, and behaviors. Furthermore, the findings identify various techniques and algorithms used, as well as their main features and benefits. These tools tailor adaptive learning programs to meet students’ specific needs, preferences, and proficiency levels, thereby enhancing student motivation and enabling them to engage with material that matches their knowledge and abilities. In conclusion, the systematic review emphasizes the significance of integrating adaptive assessments in educational environments and offers tailored recommendations for their implementation to provide adaptive learning. These recommendations can be adopted and reused as guidelines to develop new and more sophisticated assessment models.

Author 1: Adel Ihichr
Author 2: Omar Oustous
Author 3: Younes El Bouzekri El Idrissi
Author 4: Ayoub Ait Lahcen

Keywords: Adaptive assessment; adaptive learning; test; education; techniques

PDF

Paper 86: Implementation of Slicing Aided Hyper Inference (SAHI) in YOLOv8 to Counting Oil Palm Trees Using High-Resolution Aerial Imagery Data

Abstract: Palm oil is a commodity that contributes significantly to Indonesia's national economic growth, with a total plantation area of 116,000 hectares. In 2023, Indonesia was projected to produce approximately 47 million metric tons of palm oil. A major challenge in manually counting oil palm trees across a large plantation is the labour-intensive, time-consuming, costly, and dangerous nature of the field work. Aerial imagery allows large areas to be mapped with comprehensive data coverage. This study proposes a method of mapping oil palm plantations and counting oil palm trees using high-resolution aerial images taken with drones. Artificial intelligence (AI) and deep learning (DL) methods built on the You Only Look Once (YOLO) object detection model have demonstrated good accuracy in previous studies. This research utilizes the YOLOv8m object detection model together with the Slicing Aided Hyper Inference (SAHI) slicing method, which is anticipated to enhance the precision of object detection on high-resolution aerial imagery. The study concluded that the SAHI slicing method can significantly enhance model accuracy, as evidenced by a Mean Absolute Percentage Error (MAPE) of 0.01758 on aerial imagery covering 73.2 hectares, with a detection time of 5 minutes and 45 seconds.
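
A minimal sketch using the sahi and ultralytics APIs; the weights file, image name, tile size, and overlap ratios are illustrative placeholders rather than the study's configuration.

    from sahi import AutoDetectionModel
    from sahi.predict import get_sliced_prediction

    model = AutoDetectionModel.from_pretrained(
        model_type="yolov8",
        model_path="palm_yolov8m.pt",         # hypothetical fine-tuned weights
        confidence_threshold=0.4,
    )
    result = get_sliced_prediction(
        "plantation_orthophoto.jpg",           # hypothetical high-resolution aerial image
        model,
        slice_height=640, slice_width=640,     # tile size; overlap keeps boundary trees intact
        overlap_height_ratio=0.2, overlap_width_ratio=0.2,
    )
    print("oil palm trees counted:", len(result.object_prediction_list))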

Author 1: Naufal Najiv Zhorif
Author 2: Rahmat Kenzie Anandyto
Author 3: Albrizy Ullaya Rusyadi
Author 4: Edy Irwansyah

Keywords: Oil palm tree; YOLOv8; SAHI; aerial imagery; tree counting

PDF

Paper 87: Enhancing English Learning Environments Through Real-Time Emotion Detection and Sentiment Analysis

Abstract: Educational technology is increasingly focusing on real-time language learning. Prior studies have utilized Natural Language Processing (NLP) to assess students' classroom behavior by analyzing their reported feelings and thoughts, but they have not fully enhanced the feedback provided to instructors and peers. This research addresses this gap by combining two technologies, Federated 3D-Convolutional Neural Networks (Fed 3D-CNN) and Long Short-Term Memory (LSTM) networks, to investigate classroom attitudes and enhance students' language competence. These technologies enable the modification of teaching strategies through text analysis and image recognition, providing comprehensive feedback on student interactions. For this study, the Multimodal Emotion Lines Dataset (MELD) and eNTERFACE'05 datasets were selected: eNTERFACE'05 contains 3D images of individuals, while MELD captures spoken patterns. To address class imbalance, the SMOTE technique is used to balance the dataset through oversampling and undersampling. The study predicts human emotions using Federated 3D-CNN technology, which excels in image processing by predicting personal information from various angles; federated learning with 3D-CNNs allows simultaneous training across multiple clients by leveraging both local and global weight updates. The NLP system identifies emotional language patterns in students, laying the foundation for this analysis. Although not all student feedback has been extensively studied in the literature, the Fed 3D-CNN and LSTM recommendations are valuable for extracting feedback-related information from audio and video. The proposed framework achieves a prediction accuracy of 97.72%, outperforming existing methods.
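
A minimal sketch of the SMOTE balancing step using imbalanced-learn on a toy dataset standing in for the emotion-label distribution.

    from collections import Counter
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification

    # Toy imbalanced stand-in for the emotion labels.
    X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
    print("before:", Counter(y))
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
    print("after: ", Counter(y_bal))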

Author 1: Myagmarsuren Orosoo
Author 2: Yaisna Rajkumari
Author 3: Komminni Ramesh
Author 4: Gulnaz Fatma
Author 5: M. Nagabhaskar
Author 6: Adapa Gopi
Author 7: Manikandan Rengarajan

Keywords: Convolutional neural network; federated learning; LSTM; Natural Language Processing; SMOTE

PDF

Paper 88: Analysing Code-Mixed Text in Programming Instruction Through Machine Learning for Feature Extraction

Abstract: In programming education, code-mixed text using multiple languages or dialects simultaneously can significantly hinder learning outcomes due to misinterpretation and inadequate processing by traditional systems. For instance, students with bilingual or multilingual backgrounds may face difficulties with automated code reviews or multilingual coding tutorials if their code-mixed queries are not accurately understood. Motivated by these challenges, this paper proposes a Federated Bi-LSTM Model for feature extraction and classification. This model leverages Bidirectional Long Short-Term Memory (Bi-LSTM) networks within a federated learning framework to effectively accommodate various code-switching methodologies and context-dependent linguistic elements while ensuring data security and privacy across distributed sources. The Federated Bi-LSTM Model demonstrates impressive performance, achieving 99.3% accuracy, nearly 19% higher than traditional techniques such as Support Vector Machines (SVM), Multilayer Perceptron (MLP), and Random Forest (RF). This significant improvement underscores the model's capability to efficiently analyse code-mixed text and enhance programming instruction for multilingual learners. However, the model faces limitations in processing highly specialized code-mixed text and adapting to real-time applications. Future research should focus on optimizing the model for these challenges and exploring its applicability in broader domains of computer-assisted education. This model represents a substantial advancement in language-aware computing, offering a promising solution for the evolving needs of adaptive and inclusive programming education technologies and setting a new standard of support for multilingual learners.
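
The abstract does not spell out the aggregation rule; the sketch below shows standard federated averaging (FedAvg), a common way to combine per-client Bi-LSTM weight updates, offered here as an illustrative assumption.

    import numpy as np

    def fed_avg(client_weights, client_sizes):
        # Size-weighted average of per-client parameter lists, the usual
        # federated-averaging step for merging local model updates.
        total = float(sum(client_sizes))
        return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
                for i in range(len(client_weights[0]))]

    a = [np.ones((2, 2)), np.zeros(2)]   # client A's layer weights
    b = [np.zeros((2, 2)), np.ones(2)]   # client B's layer weights
    print(fed_avg([a, b], client_sizes=[300, 100]))   # A's update dominates 3:1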

Author 1: Myagmarsuren Orosoo
Author 2: J Chandra Sekhar
Author 3: Manikandan Rengarajan
Author 4: Nyamsuren Tsendsuren
Author 5: Adapa Gopi
Author 6: Yousef A.Baker El-Ebiary
Author 7: Prema S
Author 8: Ahmed I. Taloba

Keywords: Code-mixed text; text processing; federated learning; bidirectional long short-term memory; programming education; real-time applications; computer-aided education

PDF

Paper 89: A Hybrid DBN-GRU Model for Enhanced Sentiment Analysis in Product Reviews

Abstract: In an era marked by a proliferation of online reviews across various domains, navigating the extensive and diverse range of opinions can be challenging. Sentiment analysis aims to extract and interpret sentiments from these vast pools of data using computational linguistics and information retrieval techniques. This study employs deep learning methods, namely Deep Belief Networks (DBN) and Gated Recurrent Units (GRU), to classify reviews into positive and negative sentiments, addressing the issue of information overload in product reviews. The primary objective is to develop an efficient sentiment analysis system that reliably categorizes reviews as positive or negative. The study introduces a novel sentiment analysis framework combining DBN and GRU for online product review classification, enhancing accuracy through advanced feature extraction and classification techniques. The framework consists of four main phases: pre-processing, feature extraction, classification, and evaluation. During pre-processing, a comprehensive pipeline comprising data splitting, stemming, stop-word removal, and special-character separation refines the dataset, reducing noise and enhancing signal quality. Significant features are then extracted from the pre-processed data using advanced feature extraction algorithms, and the DBN-GRU model leverages these features for sentiment classification, effectively distinguishing between positive and negative attitudes. The framework's performance is subsequently evaluated to assess its efficacy in accurately classifying reviews. The combination of in-depth pre-processing and the DBN-GRU technique yielded promising results, with a high accuracy of 98.74% in differentiating between positive and negative sentiments, thereby facilitating the effective analysis of online reviews. Through extensive pre-processing and advanced classification techniques, the system addresses the challenges of noise and information overload in online reviews, providing valuable insights for both consumers and businesses.

Author 1: Shaista Khan
Author 2: J Chandra Sekhar
Author 3: J. Ramu
Author 4: Yousef A.Baker El-Ebiary
Author 5: K.Aanandha Saravanan
Author 6: Kuchipudi Prasanth Kumar
Author 7: Prajakta Uday Waghe

Keywords: Sentiment analysis; product review; deep learning; DBN-GRU

PDF

Paper 90: Harnessing Big Data: Strategic Insights for IT Management

Abstract: Big Data analytics has become an essential tool for IT management, enabling data-driven decision-making in various areas, such as resource allocation and strategic planning. This research examines the use of ARIMA (Auto Regressive Integrated Moving Average) models to improve decision-making in IT management. ARIMA is a popular time-series forecasting method that provides predictive capabilities, allowing businesses to foresee future patterns and base decisions on historical data analysis. ARIMA models are beneficial in strategic planning by predicting market trends, service demand, and IT resource utilization, which helps firms make proactive resource allocation decisions and maximize operational efficiency. Additionally, ARIMA aids predictive maintenance techniques by forecasting equipment failures and maintenance needs, enabling businesses to reduce downtime and interruptions in critical IT systems. For resource allocation, ARIMA simplifies IT budget optimization by predicting spending needs and identifying potential cost-saving areas. Through accurate forecasts of future budgetary requirements, ARIMA facilitates smart financial resource allocation, investment prioritization, and efficient cost containment, all while optimizing value delivery. Furthermore, ARIMA supports risk management initiatives by evaluating and predicting risks associated with IT projects, operations, and investments. By analyzing historical data and identifying potential risks and vulnerabilities, ARIMA enables firms to mitigate risks, limit adverse effects on business operations, and enhance decision-making processes. Integrating ARIMA into data-driven decision-making processes for strategic planning and resource allocation in IT management has great potential to improve organizational efficiency, agility, competitiveness, and effectiveness. Implemented using Python, the proposed approach achieves an MSE of 1.25, outperforming current techniques such as exponential smoothing and the moving average.
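
A minimal statsmodels sketch of an ARIMA fit-and-forecast of the kind the abstract describes; the monthly IT-spend series and the (p, d, q) order are hypothetical illustrations.

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical monthly IT spend (in $k); real inputs would be historical records.
    spend = pd.Series(
        [112, 118, 121, 130, 128, 135, 142, 140, 151, 149, 158, 163],
        index=pd.date_range("2023-01-01", periods=12, freq="MS"),
    )
    fit = ARIMA(spend, order=(1, 1, 1)).fit()   # (p, d, q) chosen for illustration
    print(fit.forecast(steps=3))                # next-quarter budget forecast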

Author 1: Asfar H Siddiqui
Author 2: Swetha V P
Author 3: Harish Chowdhary
Author 4: R.V.V. Krishna
Author 5: Elangovan Muniyandy
Author 6: Lakshmana Phaneendra Maguluri

Keywords: Autoregressive integrated moving average; big data analytics; strategic planning; IT management; time-series forecasting

PDF

Paper 91: Privacy Protection of Secure Sharing Electronic Health Records Based on Blockchain

Abstract: The secure sharing and privacy protection of medical data have become pain points for medical data management platforms. Therefore, a blockchain-based privacy protection method for securely shared electronic health records is proposed in this study, aiming to improve data security and privacy and to ensure patients' absolute ownership of their medical data. Attribute encryption and blockchain computing are utilized to construct a secure data sharing model, and zero-knowledge proofs and the ElGamal encryption algorithm are introduced to further strengthen data privacy protection. Experimental verification showed that the proposed secure sharing method has advantages in key size and key-generation time cost: compared with other consensus mechanisms, the zero-knowledge proof reduced the average time cost of generating keys by 54.36%. The proposed privacy protection method also improved protection effectiveness by an average of 7.73% over other methods. The results indicate that the proposed secure sharing and privacy protection methods improve the overall performance and security of the system while fully ensuring patients' absolute ownership of their data, and have positive application value in the privacy protection of medical data.
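
A toy sketch of ElGamal encryption and decryption over a small prime group, illustrating the mechanics of the algorithm the abstract names; real deployments use far larger parameters (large safe primes or elliptic curves).

    import random

    p, g = 467, 2                         # toy public group parameters
    x = random.randrange(2, p - 1)        # patient's private key
    h = pow(g, x, p)                      # matching public key

    m = 123                               # record fragment encoded as a group element
    k = random.randrange(2, p - 1)        # fresh ephemeral randomness
    c1, c2 = pow(g, k, p), (m * pow(h, k, p)) % p   # ciphertext pair

    recovered = (c2 * pow(c1, p - 1 - x, p)) % p    # decrypt: c2 * c1^(-x) mod p
    assert recovered == m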

Author 1: Yuan Wang
Author 2: Lin Sun

Keywords: Blockchain; secure sharing; electronic health records; privacy protection; zero-knowledge proof; attribute encryption

PDF

Paper 92: Ensemble Machine Learning for Enhanced Breast Cancer Prediction: A Comparative Study

Abstract: Breast cancer poses a significant threat to women’s health, affecting one in every eight women globally and often leading to fatal outcomes due to delayed detection in advanced stages. Recent advancements in machine learning have opened doors to early detection possibilities. This study explores various machine learning algorithms, including K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), Decision Tree (DT), Logistic Regression (LR), Naive Bayes (NB), Random Forest (RF), AdaBoost (AB), Gradient Boosting (GB), and XGBoost (XGB). The employed algorithms, along with nested ensembles of Bagging, Boosting, Stacking, and Voting, predicted whether a cell is benign or malignant using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. Utilizing the Chi-square feature selection technique, this study identified 21 essential features to enhance prediction accuracy. Results of this study indicate that MLP LR achieved the highest accuracy of 98.25%, closely followed by SVM with 97.08% accuracy. Notably, the Voting classifier yielded the highest accuracy of 99.42% among the ensemble methods. These findings suggest that the research model holds promise for accurate breast cancer prediction, thus contributing to increased awareness and early intervention.
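
A minimal scikit-learn sketch of chi-square feature selection (k = 21) feeding a soft-voting ensemble on the WDBC dataset; the member models and hyperparameters are illustrative, not the study's tuned configuration.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)   # the WDBC data used in the study
    vote = VotingClassifier([
        ("lr", LogisticRegression(max_iter=5000)),
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(random_state=0)),
    ], voting="soft")
    pipe = make_pipeline(MinMaxScaler(),           # chi2 requires non-negative inputs
                         SelectKBest(chi2, k=21),  # the 21 features the paper retains
                         vote)
    print(cross_val_score(pipe, X, y, cv=5).mean())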

Author 1: Md. Mijanur Rahman
Author 2: Khandoker Humayoun Kobir
Author 3: Sanjana Akther
Author 4: Md. Abul Hasnat Kallol

Keywords: Breast cancer; detection; machine learning; bagging; boosting; stacking; voting; chi-square; ensemble; hybrid ensemble; bioinformatics

PDF

Paper 93: Deep Learning-Based Depression Analysis Among College Students Using Multi Modal Techniques

Abstract: This study proposes a novel approach to handle mental health issues, particularly depression, among college students, called CRADDS (Comprehensive Real-time Adaptive Depression Detection System). CRADDS combines advanced tensor fusion networks that analyze emotions from audio, text, and video data more accurately, drawing on the strengths of deep learning and multimodal approaches. The system is constructed with a hybrid algorithm framework that combines Support Vector Machine (SVM), Convolutional Neural Network (CNN), and Bidirectional Long Short-Term Memory (BiLSTM) techniques. To address the limitations identified in earlier research, CRADDS expands its feature set and uses effective machine learning algorithms to reduce false positives and negatives. Further, it employs advanced IoT devices to collect real-time data from a range of public and private sources. Depression symptoms can be continuously monitored in real time, which helps identify depression at an early stage and safeguard students' well-being. Additionally, the model can adjust based on interaction features, providing psychological support through automatic responses to observed verbal and nonverbal cues. Experiments show that the proposed CRADDS achieves impressive accuracy on text, audio, and video features compared with existing models. Overall, CRADDS is a useful tool for mental health professionals and educational institutions because it not only identifies depression but also helps treat it earlier, supporting academic performance and general well-being. The proposed model's validation accuracy increases from 63.04% to 86.08%, higher than that of the existing SVM model.

Author 1: Liyan Wang

Keywords: Depression analysis; multimodal techniques; mental health; real-time monitoring; hybrid algorithms

PDF

Paper 94: A Novel Architecture of Depthwise Separable CNN and Multi-Level Pooling for Detection and Classification of Myopic Maculopathy

Abstract: Myopic maculopathy (MM), also known as myopic macular degeneration, is the most serious, irreversible, vision-threatening complication of myopia and a leading cause of visual impairment and blindness. Numerous research studies demonstrate that the convolutional neural network (CNN) outperforms alternatives in many applications. Current CNN designs employ a variety of techniques, such as fixed convolutional kernels, the absolute value layer, data augmentation, and domain knowledge, to enhance performance, yet network structure design has received comparatively little attention. The intricacy of the MM categorization and definition system makes it challenging to employ deep learning (DL) technology in the diagnosis of pathologic myopia lesions. To increase detection precision in MM's spatial domain, the proposed work first creates a novel CNN structure and then improves the convolution kernels in the preprocessing layer. Smaller convolution kernels reduce the number of parameters and model the characteristics of small local regions. Next, the channel correlation of the residuals is exploited with separable convolutions to compress the image features, and local features are combined using the spatial pyramid pooling (SPP) technique, which improves feature representation through multi-level pooling. Finally, data augmentation is used to further enhance network performance. The model achieved an accuracy of 95%, an F1-score of 96.5%, and an AUC of 0.92 on the augmented MM-PALM dataset. The paper concludes with a comparative study of various deep-learning architectures, whose findings highlight that the hybrid CNN with SPP and XGBoost (Depthwise-XGBoost) is the ideal deep learning classification model for automated detection of the four stages of MM.
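
A minimal Keras sketch of depthwise separable convolutions followed by multi-level (pyramid) pooling of the kind the abstract describes; the layer sizes and pyramid levels are placeholders, and the XGBoost head is omitted.

    from tensorflow.keras import layers, Model

    inp = layers.Input(shape=(224, 224, 3))
    x = layers.SeparableConv2D(32, 3, strides=2, activation="relu")(inp)  # depthwise separable
    x = layers.SeparableConv2D(64, 3, strides=2, activation="relu")(x)

    # Multi-level (pyramid) pooling: pool the same map at several grid sizes.
    levels = [layers.GlobalAveragePooling2D()(x)]
    for grid in (2, 4):
        pooled = layers.AveragePooling2D(pool_size=x.shape[1] // grid)(x)
        levels.append(layers.Flatten()(pooled))
    feat = layers.Concatenate()(levels)
    out = layers.Dense(4, activation="softmax")(feat)   # four MM severity stages
    model = Model(inp, out)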

Author 1: Alaa E. S. Ahmed

Keywords: Retinograph; ophthalmologists; computer-aided diagnosis; vision loss; deep learning; retinograph images; myopic maculopathy

PDF

Paper 95: Towards a Framework for Optimized Microservices Placement in Cloud Native Environments

Abstract: In recent times, cloud-native technologies have increasingly enabled the design and deployment of applications using a microservice architecture, enhancing modularity, scalability, and management efficiency. These advancements are specifically tailored for the creation and orchestration of containerized applications, marking a significant leap forward in the industry. Emerging cloud-native applications employ container-based virtualization instead of the traditional virtual machine approach. However, adopting this new cloud-native approach requires a shift in vision, particularly in addressing the challenges of microservices placement. Ensuring optimal resource utilization, maintaining service availability, and managing the complexity of distributed deployments are critical considerations that necessitate advanced orchestration and automation strategies. We introduce a new framework for optimized microservices placement that optimizes application performance based on resource requirements. This approach aims to efficiently allocate infrastructural resources while ensuring high service availability and adherence to service level agreements. The implementation and experimental results of our method validate the feasibility of the proposed approach.

Author 1: Riane Driss
Author 2: Ettazi Widad
Author 3: Ettalbi Ahmed

Keywords: Cloud native architecture; service placement; containerization; cloud resource allocation; microservices architecture

PDF

Paper 96: Advances in Consortium Chain Scalability: A Review of the Practical Byzantine Fault Tolerance Consensus Algorithm

Abstract: Blockchain technology, renowned for its decentralized, immutable, and transparent features, offers a reliable framework for trust in distributed systems. Among blockchain deployment models, which include public, private, hybrid, and consortium chains, consortium blockchains have grown in popularity because they balance privacy and collaboration. A significant challenge in these systems is the scalability of consensus mechanisms, particularly when employing the Practical Byzantine Fault Tolerance (PBFT) algorithm. This review focuses on enhancing PBFT's scalability, a critical factor in the effectiveness of consortium chains. Innovations such as Boneh–Lynn–Shacham (BLS) signatures and Verifiable Random Functions (VRF) are highlighted for their ability to reduce algorithmic complexity and increase transaction throughput. The discussion extends to real-world applications, particularly platforms like Hyperledger Fabric, showcasing the practical benefits of these advancements. This paper provides a concise overview of the latest methodologies that enhance the performance and scalability of PBFT-based consortium chains, serving as a valuable resource for researchers and practitioners aiming to optimize these systems for high-performance demands.

Author 1: Nur Haliza Abdul Wahab
Author 2: Zhang Dayong
Author 3: Juniardi Nur Fadila
Author 4: Keng Yinn Wong

Keywords: Blockchain; Practical Byzantine Fault Tolerance (PBFT); consensus algorithm; cryptography

PDF

Paper 97: The Low-Cost Transition Towards Smart Grids in Low-Income Countries: The Case Study of Togo

Abstract: Power grids must integrate information and communication technologies to become intelligent. This integration will enable power grids to be reliable, resilient, and environmentally friendly. The smart grid would help low-income countries achieve a more stable power system to boost their development. However, implementing a smart grid is costly and requires specialized skills. This article aims to outline a low-cost transition from conventional power grids to smart grids in low-income countries. It examines the possibility of telecommunications networks participating in the implementation of smart grids in these countries to minimize costs. A combination of quantitative and qualitative methods was used. Using Togo as an example, a conceptual scheme for a low-cost smart grid is proposed, with Togo's telecom operators providing the supporting telecommunications network. A transition plan to the smart grid is proposed, based on feedback from developed countries.

Author 1: Mohamed BARATE
Author 2: Eyouléki Tcheyi Gnadi PALANGA
Author 3: Ayité Sénah Akoda AJAVON
Author 4: Kodjo AGBOSSOU

Keywords: Smart grid; telecommunications network; low cost; low-income countries

PDF

Paper 98: A Novel Smart System with Jetson Nano for Remote Insect Monitoring

Abstract: Insect monitoring is vital for agricultural management and environmental conservation, but traditional methods are labor-intensive and time-consuming. This paper introduces a novel smart system utilizing NVIDIA's Jetson Nano combined with object detection models for remote insect monitoring. The system automates the processes of detection, identification, and monitoring, thereby significantly improving the efficiency and accuracy of insect population assessments. The implementation of the YOLOv7 model on a dataset containing 10 insect species achieved a mAP@0.5 of 77.2%. This enables farmers to take timely and appropriate measures to prevent pests and diseases, reducing production costs and protecting the environment.

Author 1: Thanh-Nghi Doan
Author 2: Thien-Hue Phan

Keywords: NVIDIA Jetson Nano; insect monitoring; YOLOv7

PDF

Paper 99: AI-IoT Enabled Surveillance Security: DeepFake Detection and Person Re-Identification Strategies

Abstract: Face recognition serves as a biometric tool and technological approach for identifying individuals based on distinctive facial features and physiological characteristics such as interocular distance, nasal width, lip contours, and facial structure. Among various identification methods, it stands out for its efficacy. However, the emergence of deepfake technology poses a significant security threat to real-time surveillance networks. In response to this challenge, we propose an AI-IoT enabled surveillance security framework aimed at mitigating deepfake-related risks. This framework is designed for person identification by leveraging facial features and characteristics. Specifically, we employ a Reinforcement Learning-based Deep Q Network framework for person identification and deepfake detection. Through the integration of AI and IoT technologies, our framework offers enhanced surveillance security by accurately identifying individuals while effectively detecting and combating deepfake-generated content. This research contributes to the advancement of surveillance systems, providing a robust solution to address emerging security threats in real-time monitoring environments. The introduced Deep Q Network supports a real-time surveillance framework in which live images are identified through a continuous learning mechanism and security issues are resolved through a feedback mechanism.

Author 1: Srikanth Bethu
Author 2: M. Trupthi
Author 3: Suresh Kumar Mandala
Author 4: Syed Karimunnisa
Author 5: Ayesha Banu

Keywords: Artificial intelligence; deep learning; face recognition; IoT; reinforcement learning; Deep Q network; deepfake

PDF

Paper 100: Deep Learning-Driven Citrus Disease Detection: A Novel Approach with DeepOverlay L-UNet and VGG-RefineNet

Abstract: Agriculture is essential to global food production, income generation, and livelihoods. Citrus fruits are produced worldwide and have a significant impact on food production, nutrition, and agriculture. During production, farmers face difficulties due to diseases that affect plant growth. Black spot, canker, and greening are citrus leaf diseases that put citrus production at risk, resulting in economic losses as well as reduced supply stability. Early detection of these diseases through recent technologies like deep learning will help farmers achieve better yields and quality. Current methods fall short in marking the area affected by the disease with sufficient accuracy and performance. This work proposes a novel method for the segmentation and classification of citrus leaf diseases. The method consists of three phases. In the first phase, DeepOverlay L-UNet is used to segment the affected regions. In the second phase, disease detection is carried out using VGG-RefineNet, and in the third phase, the affected region is highlighted in the original image with a severity level. The DeepOverlay L-UNet model proves effective in detecting affected areas, thereby enabling clear visualization of the spread of the disease. The results affirm that the proposed method outperforms existing approaches, with a training IoU of 0.9864 and a validation IoU of 0.9334.

Author 1: P Dinesh
Author 2: Ramanathan Lakshmanan

Keywords: Citrus disease detection; highlighting affected region; Deep learning; semantic segmentation; DeepOverlay L-UNet; VGG-RefineNet

PDF

Paper 101: Enhancing Predictive Analysis of Vehicle Accident Risk: A Fuzzy-Bayesian Approach

Abstract: Although delivery transport activities aim to ensure excellent customer service, risks such as accidents, property damage, and additional costs occur frequently, necessitating risk control and prevention as critical components of transport supply chain quality. This article analyzes the risk of accidents, a fundamental root cause of critical situations that can have significant economic impacts on transport companies and potentially lead to customer loss if recurring. The case study develops a fuzzy Bayesian approach to anticipate accident risks through predictive analysis by combining Bayesian networks and fuzzy logic. Results reveal a strong correlation between fatal injuries in accidents and factors related to driver and vehicle conditions. The predictive model for accident occurrence is validated through three axioms, offering insights for carriers, transport companies, and governments to minimize accidents, injuries, and costs. Moreover, the developed model provides a foundation for various predictive applications in freight transport and other research fields aiming to identify parameters impacting accident occurrence.
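
A minimal sketch of the fuzzy-Bayesian idea on a single factor (the membership functions and probabilities below are illustrative stand-ins, not the paper's model): a continuous risk factor is fuzzified, and the membership degrees weight the conditional accident probabilities as soft evidence:

```python
# Fuzzify driver fatigue (hours at the wheel) and mix hypothetical
# conditional probabilities P(accident | fatigue state) by membership degree.

def fatigue_membership(hours):
    """Two complementary fuzzy sets for driver fatigue: low / high."""
    high = min(max((hours - 4) / 6.0, 0.0), 1.0)   # ramps up from 4 h to 10 h
    return {"low": 1.0 - high, "high": high}

P_ACCIDENT = {"low": 0.02, "high": 0.15}           # hypothetical CPT entries

def accident_risk(hours):
    mu = fatigue_membership(hours)
    # Soft evidence: weight each conditional by its membership degree.
    return sum(mu[state] * P_ACCIDENT[state] for state in mu)

for h in (2, 6, 9):
    print(f"{h} h driving -> P(accident) ≈ {accident_risk(h):.3f}")
```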

Author 1: Houssam Mensouri
Author 2: Loubna Bouhsaien
Author 3: Youssra Amazou
Author 4: Abdellah Azmani
Author 5: Monir Azmani

Keywords: Road traffic injuries; risk management; predictive analysis; Bayesian network; fuzzy logic; accident

PDF

Paper 102: Use of Natural Language Processing Methods in Teaching Turkish Proverbs and Idioms

Abstract: This study proposes a series of activities for the easier learning of proverbs and idioms in the Turkish language. In Turkish, proverbs and idioms are structures used both in academic settings and in daily life, especially by 10-year-old students who have entered the abstract thinking stage. Since these structures contain abstract expressions, they initially seem difficult to learn. The study used the 2,396 proverbs and 11,209 idioms in the online dictionary of the Turkish Language Association. A pre-test was conducted to measure the knowledge level of the 20 students selected as the study group. The structure of the idioms and proverbs was analyzed using Natural Language Processing methods. Based on this analysis, the material was divided into difficulty groups according to information such as word count, n-gram analysis, and frequency level, and during the process students were asked questions from an online question pool for both the tutorial and the test. Generative artificial intelligence enabled semantic analysis of texts containing idioms and proverbs. Following these activities, a post-test was administered to measure the effectiveness of the process. As a result, students' idiom knowledge increased by 51.8% and their proverb knowledge increased by 59.4%.
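
A minimal sketch of this kind of frequency-based difficulty grouping (the scoring rule and threshold are assumptions for illustration; real use would score against the full dictionary):

```python
# Bucket proverbs by length and by how rare their words (unigrams) are in
# the collection: longer proverbs with rarer words are treated as harder.

from collections import Counter

proverbs = [
    "damlaya damlaya göl olur",
    "sakla samanı gelir zamanı",
    "ağaç yaşken eğilir",
]

freq = Counter(w for p in proverbs for w in p.split())

def difficulty(proverb):
    words = proverb.split()
    avg_freq = sum(freq[w] for w in words) / len(words)
    score = len(words) / avg_freq      # longer and rarer -> higher score
    return "hard" if score > 3.5 else "easy"

for p in proverbs:
    print(f"{p!r}: {difficulty(p)}")
```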

Author 1: Ertürk ERDAGI

Keywords: Idiom; proverb; natural language processing; word frequency; n-gram analysis; contextual analysis

PDF

Paper 103: Performance Analysis of One-Stage and Two-Stage Object Detection Methods for Car Damage Detection

Abstract: The large use of private cars is directly proportional to the number of insurance claims. Therefore, insurance companies need a breakthrough or a new, more effective and efficient approach in order to compete for the trust of their customers. One approach is to use artificial intelligence to detect damage to the car body and thereby speed up the claims process. In this research, several experiments were carried out using various types of models, namely Mask R-CNN, ResNet50, MobileNetv2, YOLO-v5, and YOLO-v8, to detect damage to the car body. Among the experiments carried out, the best results were obtained using the YOLO-v8x model, with precision, recall, and F1-score values of 0.963, 0.951, and 0.936, respectively.

Author 1: Harum Ananda Setyawan
Author 2: Alhadi Bustamam
Author 3: Rinaldi Anwar Buyung

Keywords: Car damage detection; insurance claim; deep learning; object detection

PDF

Paper 104: Innovative Approaches to Agricultural Risk with Machine Learning

Abstract: Agriculture is fraught with uncertainties arising from factors like weather volatility, pest outbreaks, market fluctuations, and technological advancements, posing significant challenges to farmers. By gaining insights into these risks, farmers can enhance decision-making, adopt proactive measures, and optimize resource allocation to minimize negative impacts and maximize productivity. The research introduces an innovative approach to risk prediction, highlighting its pivotal role in improving agricultural practices. Through meticulous analysis and optimization of a farmer dataset, employing pre-processing techniques, the study ensures the reliability of predictive models built on high-quality data. Utilizing the Variance Inflation Factor (VIF) for feature selection, the study identifies influential features critical for accurate risk classification. Employing techniques like KNN, Random Forest, Logistic Regression, SVM, Ridge classifier, Gradient Boosting, and XGBoost, the study achieves promising results. Among them, KNN, Random Forest, Gradient Boosting, and XGBoost achieved the highest accuracy of 88.46%. This underscores the effectiveness of the proposed methodology in providing actionable insights into potential risks faced by farmers, enabling informed decision-making and risk mitigation strategies.
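
For reference, the VIF screening step can be sketched as below on synthetic data: VIF_j = 1 / (1 - R_j²), where R_j² comes from regressing feature j on the remaining features, so strongly collinear features show inflated values (feature names and thresholds are hypothetical):

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor for each column of X."""
    scores = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([others, np.ones(len(y))])   # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        scores.append(1.0 / (1.0 - r2))
    return np.array(scores)

rng = np.random.default_rng(0)
rainfall = rng.normal(size=200)
humidity = 0.9 * rainfall + 0.1 * rng.normal(size=200)   # nearly collinear
pest_index = rng.normal(size=200)
X = np.column_stack([rainfall, humidity, pest_index])
print(vif(X))   # rainfall and humidity show inflated VIFs; pest_index near 1
```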

Author 1: Sumi. M
Author 2: S. Manju Priya

Keywords: Random forest; ridge classifier; logistic regression; gradient boosting; extreme gradient boost; Variance Inflation Factor; support vector machine; farmer risk prediction; agricultural risk

PDF

Paper 105: Knowledge Graph-Based JingFang Granules Efficacy Analysis for Influenza-Like Illness

Abstract: This study presents a novel approach to evaluate the efficacy of JingFang granules in treating influenza-like illness by integrating knowledge graph technology with clinical trial data. We developed an innovative knowledge graph-based pharmacological analysis method and validated its effectiveness through a randomized controlled clinical trial. A knowledge graph was constructed by extracting drug-disease entities and their relationships from the literature using a machine learning workflow. Deep mining of the knowledge graph was performed using a graph convolutional network and T5 mini-model to analyze the association between JingFang and various diseases. Subsequently, a randomized controlled clinical trial involving 106 patients was conducted. Results showed that the cure rate in the JingFang combined treatment group (92.5%) was significantly higher than in the control group (81.1%), especially among the middle-aged and elderly population. Subgroup analysis revealed that JingFang had a more pronounced therapeutic effect on patients aged 34 and above, consistent with the knowledge graph analysis results. The innovation of this study lies in proposing a novel framework for evaluating therapeutic efficacy by combining knowledge graphs with clinical trial results. This approach not only provides new analytical tools for similar drug development but also improves the efficiency and accuracy of drug development by systematically validating literature efficacy data and integrating it with actual clinical trial results. Furthermore, applying a knowledge graph to evaluate the therapeutic effects of traditional Chinese medicines like JingFang is an innovative and unique approach, bringing new perspectives to this under-explored field. This method holds potential for broad application in drug development and repurposing, particularly in the context of Traditional Chinese Medicine.

Author 1: Yuqing Li
Author 2: Zhitao Jiang
Author 3: Zhiyan Huang
Author 4: Wenqiao Gong
Author 5: Yanling Jiang
Author 6: Guoliang Cheng

Keywords: Knowledge graph; clinical trial; influenza-like illness; jingfang; drug efficacy analysis

PDF

Paper 106: Exploring Photo-Based Dialogue Between Elderly Individuals and Generative AI Agents

Abstract: Japan's rapid transition into a super-aged society, with 29% of its population aged 65 and over, underscores the urgent need for innovative elderly care solutions. This study explores the use of generative AI to facilitate meaningful interactions between elderly individuals and AI conversational agents using photos. Utilizing Microsoft Azure's AI services, including Computer Vision and Speech, the AI agent analyzes photos to generate engaging conversation prompts, leveraging GPT-3.5-turbo for natural language processing. Preliminary experiments with healthy elderly participants provided insights to refine the AI agent's conversational skills, focusing on timing, speech speed, and emotional engagement. The findings indicate that elderly users respond positively to AI agents that exhibit human-like conversational behaviors, such as attentiveness and expressive communication. By addressing functional and emotional needs, the AI agent aims to enhance the quality of life for the elderly, offering scalable solutions to the challenges of an aging society. Future work will focus on further improving the AI agent's capabilities and assessing its impact on the mental health and social engagement of elderly users.

Author 1: Kousuke Shimizu
Author 2: Banba Ami
Author 3: Choi Dongeun
Author 4: Miyuki Iwamoto
Author 5: Nahoko Kusaka
Author 6: Panote Siriaraya
Author 7: Noriaki Kuwahara

Keywords: Generative AI; elderly care; conversational agents; photo-based interaction

PDF

Paper 107: The Application of Blockchain Technology in Network Security and Authentication: Issues and Strategies

Abstract: With the advent of the digital age, the importance of network security and authentication is gradually highlighted. Blockchain technology, as a distributed, immutable record technology, brings great potential value to both areas. This study aims to delve into how blockchain technology can ensure network security and its application in authentication. Through extensive questionnaires and data collection, the study successfully built a deep regression model to reveal relevant causal relationships. The findings show that the adoption of blockchain technology can significantly improve the perceived effectiveness of cybersecurity, especially when organizations have a high opinion of it. This finding provides a valuable reference for organizations to make better use of this technology. However, there are still some limitations in the study, such as the scope of data collection and the complexity of the model. For these problems, this paper also puts forward corresponding solutions.

Author 1: Yanli Lu

Keywords: Blockchain; network security; identity verification; deep regression model

PDF

Paper 108: Optimization of Green Supply Chain Management Based on Improved MPA

Abstract: With the advancement of industrialization and urbanization in the global market, the contradiction between economic development and environmental protection is becoming increasingly prominent. In response to this optimization problem, this study constructs a green supply chain network model with green constraints. In the second half of the iterations of the marine predator algorithm, Gaussian mutation is used to replace the original fish aggregating device (FAD) effect, yielding an improved marine predator algorithm for solving the green supply chain network model. The results demonstrated that the designed algorithm performed better than the other algorithms on all four benchmark functions. Except for the mean value of 2.17×10^-202 when solving function 1, the other means and standard deviations were all 0. When solving the multi-modal benchmark test functions, the proposed algorithm still had the fastest convergence speed, and the difference was even more pronounced. On small-scale testing sets, the proposed algorithm found the best solution for each test instance, resulting in lower total costs of 139,832.97 yuan, 148,561.28 yuan, and 147,535.81 yuan, respectively. On three test sets of different scales, the proposed algorithm had the fastest convergence speed and successfully converged to feasible solutions. The results verify the algorithm's performance and its good application effect in handling green supply chain network problems.
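
A conceptual sketch of the mutation step (constants and schedule are assumptions, not the paper's exact scheme): in the second half of the iterations, candidates are perturbed with Gaussian noise whose scale shrinks as the search converges, in place of the FADs jump:

```python
import numpy as np

rng = np.random.default_rng(42)

def mutate_population(pop, t, t_max, sigma=0.1, lower=-10.0, upper=10.0):
    """Apply Gaussian mutation only in the second half of the run."""
    if t < t_max // 2:
        return pop                         # early phase: original MPA rules
    noise = rng.normal(0.0, sigma, size=pop.shape)
    scale = 1.0 - t / t_max                # shrink mutation as t -> t_max
    return np.clip(pop + scale * noise, lower, upper)

pop = rng.uniform(-10, 10, size=(5, 3))    # 5 candidates, 3 decision vars
print(mutate_population(pop, t=80, t_max=100))
```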

Author 1: Dan Li

Keywords: Green supply chain; supply chain management; marine predator algorithm; optimization problem; fish aggregating device

PDF

Paper 109: Generating New Ulos Motif with Generative AI Method in Digital Tenun Nusantara (DiTenun) Platform

Abstract: DiTenun is a startup developing a platform that utilizes artificial intelligence to create innovative digital textile patterns for woven fabrics. One of the woven motifs produced is the Ulos motif, a traditional weaving of the Batak tribe that comes in various types, patterns/motifs, and sizes. Currently, the DiTenun platform applies two methods to generate Ulos motifs: image quilting and SinGAN. The image quilting method uses synthetic textures to form a new texture by combining blocks from the original texture. SinGAN is a Generative Adversarial Network (GAN) method that accepts one motif image as input to generate a new motif resembling the training motif. The new motifs generated by both methods are still repetitive and lack variation. Therefore, this paper focuses on improving the StyleGAN method, which utilizes two or more Ulos motif images as input to produce new, innovative motifs through mixing regularization. Six experimental scenarios were carried out on the Ulos motif image dataset with different numbers of input motifs and hyperparameter tuning. The experimental results are new images with diverse patterns, colour combinations, and merged motif elements. StyleGAN performance is measured with the Frechet Inception Distance (FID) and Kernel Inception Distance (KID) to find the best-quality motif generated across the six hyperparameter tuning scenarios. The results show that the fourth scenario on the Ulos Batak Karo, Gundur category (min/max resolution of 8 and 256, four input images, 100,000 training iterations per resolution, and a maximum of 50,000,000 iterations) generated the best motifs, with FID and KID scores of 91.32 and 0.04, respectively.

Author 1: Humasak Simanjuntak
Author 2: Evelin Panjaitan
Author 3: Sandraulina Siregar
Author 4: Unedo Manalu
Author 5: Samuel Situmeang
Author 6: Arlinta Barus

Keywords: Generate Ulos motif; StyleGAN; DiTenun; generative AI Ulos motif

PDF

Paper 110: eTNT: Enhanced TextNetTopics with Filtered LDA Topics and Sequential Forward / Backward Topic Scoring Approaches

Abstract: TextNetTopics is a novel text classification-based topic modelling approach that focuses on topic selection rather than individual word selection to train a machine learning algorithm. However, one key limitation of TextNetTopics is its scoring component, which evaluates each topic in isolation and ranks them accordingly, ignoring the potential relationships between topics. In addition, the chosen topics may contain redundant or irrelevant features, potentially increasing the feature set size and introducing noise that can degrade the overall model performance. To address these limitations and improve the classification performance, this study introduces an enhancement to TextNetTopics. eTNT integrates two novel scoring approaches: Sequential Forward Topic Scoring (SFTS) and Sequential Backward Topic Scoring (SBTS), which consider topic interactions by assessing sets of topics simultaneously. Moreover, it incorporates a filtering component that aims to enhance topics' quality and discriminative power by removing non-informative features from each topic using Random Forest feature importance values. These integrations aim to streamline the topic selection process and enhance classifier efficiency for text classification. The results obtained from the WOS-5736, LitCovid, and MultiLabel datasets provide valuable insights into the superior effectiveness of eTNT compared to its counterpart, TextNetTopics.
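
A minimal sketch of the forward variant (SFTS) under stated assumptions: topics are groups of word-feature columns, and a topic set is kept only if adding it improves cross-validated accuracy; the data and topic groups below are synthetic placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((300, 30))
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)   # only topic 0 is informative
topics = {0: list(range(0, 10)), 1: list(range(10, 20)), 2: list(range(20, 30))}

def sfts(X, y, topics):
    """Greedy forward scoring over topic feature groups."""
    chosen, best = [], 0.0
    for t, cols in topics.items():
        trial = chosen + cols
        score = cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, trial], y, cv=5).mean()
        if score > best:                       # keep the topic only if it helps
            chosen, best = trial, score
    return chosen, best

cols, acc = sfts(X, y, topics)
print(f"kept {len(cols)} features, CV accuracy {acc:.3f}")
```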

Author 1: Daniel Voskergian
Author 2: Rashid Jayousi
Author 3: Burcu Bakir-Gungor

Keywords: Topic scoring; topic modeling; text classification; machine learning

PDF

Paper 111: Computer Aided Classification of Lung Cancer, Ground Glass Lung and Pulmonary Fibrosis Using Machine Learning and KNN Classifier

Abstract: Respiratory diseases are among the most prevalent acute and chronic ailments worldwide. According to a recent survey, there were around 545 million cases of chronic respiratory diseases worldwide. Chronic respiratory diseases (CRDs) such as chronic obstructive pulmonary disease (COPD), pneumoconioses, asthma, interstitial lung disease, and pulmonary sarcoidosis are significant public health problems across the world. The most significant CRD risk factors have been identified, including smoking, contact with indoor and outdoor pollutants, allergies, occupational exposure, poor nutrition, obesity, and inactivity. Interstitial lung diseases are diagnosed on high-resolution computed tomography (HRCT) using a variety of interstitial patterns, namely reticular, nodular, reticulonodular, ground-glass, cystic, ground-glass with reticular, and cystic with ground-glass. If lung diseases are identified at an early stage, life expectancy can be increased. Computer-aided diagnosis could play a crucial role in identifying lung diseases at an early stage, in disease management, and in treatment planning. In this paper, a novel method is proposed to identify and classify HRCT images of cancerous lungs using Machine Learning (ML), and to identify and classify ground-glass lung, pulmonary fibrosis lung, and healthy lung HRCT images using the Local Binary Pattern (LBP) and a K-Nearest Neighbor (KNN) classifier. Experiments with the proposed method on 996 images yielded 94% accuracy.
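
An LBP-plus-KNN pipeline of the kind described can be sketched as follows on synthetic texture patches (real use would load labelled HRCT images; the patch generator is a stand-in):

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

P, R = 8, 1                                    # 8 neighbours at radius 1

def lbp_histogram(img):
    """Uniform LBP codes summarised as a normalised histogram."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(1)
smooth = [rng.integers(120, 136, (64, 64), dtype=np.uint8) for _ in range(20)]
rough = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
X = np.array([lbp_histogram(im) for im in smooth + rough])
y = np.array([0] * 20 + [1] * 20)              # 0 = low texture, 1 = high texture

clf = KNeighborsClassifier(n_neighbors=3).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```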

Author 1: Prathibha T P
Author 2: Punal M Arabi

Keywords: Ground glass; healthy; KNN; LBP; lung cancer; lung diseases classification; ML; pulmonary fibrosis

PDF

Paper 112: An Integrated Approach for Real-Time Gender and Age Classification in Video Inputs Using FaceNet and Deep Learning Techniques

Abstract: The increasing demand for real-time gender and age classification in video inputs has spurred advancements in computer vision techniques. This research work presents a comprehensive pipeline for addressing this challenge, encompassing three pivotal tasks: face detection, gender classification, and age estimation. FaceNet effectively identifies faces within video streams, serving as the foundation for subsequent analyses. Gender classification is then achieved using a finely tuned ResNet34 model trained as a binary classifier. The optimization process employs a binary cross-entropy loss function with the ADAM optimizer at a learning rate of 1e-2; the achieved accuracy of 97% on the test dataset demonstrates the model's proficiency. For age estimation, the ADAM optimizer with a learning rate of 1e-3 is used to train with the Mean Absolute Error (MAE) loss function, and the achieved MAE of 6.8 signifies the model's proficiency. The comprehensive pipeline proposed in this research showcases the individual components' efficacy and demonstrates the synergy achieved through their integration. Experimental results substantiate the pipeline's capacity for real-time gender and age classification within video inputs, thus opening avenues for applications spanning diverse domains.

Author 1: Abhishek Nazare
Author 2: Sunita Padmannavar

Keywords: Gender classification; age estimation; face detection; FaceNet; ResNet34; computer vision techniques

PDF

Paper 113: Modification of the Dantzig-Wolfe Decomposition Method for Building Hierarchical Intelligent Systems

Abstract: This article examines the Dantzig-Wolfe decomposition method for solving large-scale optimization problems. Problems at this scale strain the standard simplex algorithm, which makes the Dantzig-Wolfe method a valuable tool. The article describes in detail a new modification of the Dantzig-Wolfe decomposition method. This modification aims to improve the efficiency of the coordination task, the key component that defines the subtasks. By significantly reducing the number of rows in the coordination problem, the proposed method achieves faster computation and reduced memory requirements compared to the original approach. Although the Dantzig-Wolfe method has faced adoption problems due to the complexity of implementing its algorithms for hierarchical systems, this modification opens up new potential.

Author 1: Turganzhan Velyamov
Author 2: Alexandr Kim
Author 3: Olga Manankova

Keywords: Decomposition method; optimization; parallel processing; linear programming

PDF

Paper 114: An Improved Liver Disease Detection Based on YOLOv8 Algorithm

Abstract: The identification and diagnosis of liver diseases hold significant importance within the domain of digital pathology research. Various methods have been explored in the literature to address this crucial task, with deep learning techniques emerging as particularly promising due to their ability to yield highly accurate results compared to other traditional approaches. However, despite these advancements, a significant research gap persists in the field. Many deep learning-based liver disease detection methods continue to struggle with achieving consistently high accuracy rates. This issue is highlighted in numerous studies where traditional convolutional neural networks and hybrid models fall short in precision and recall metrics. To bridge this gap, our study proposes a novel approach utilizing the YOLOv8 algorithm, which is designed to significantly enhance the accuracy and effectiveness of liver disease detection. The YOLOv8 algorithm's architecture is well-suited for real-time object detection and has been optimized for medical imaging applications. Our method involves generating innovative models tailored specifically for liver disease detection by leveraging a comprehensive dataset from the Roboflow repository, consisting of 3,976 annotated liver images. This dataset provides a diverse range of liver disease cases, ensuring robust model training. Our approach includes meticulous model training with rigorous hyperparameter tuning, using 70% of the data for training, 20% for validation, and 10% for testing. This structured training process ensures that the model learns effectively while minimizing overfitting. We evaluate the model using precision, recall, and mean average precision (mAP@0.5) metrics, demonstrating significant improvements over existing methods. Through extensive experimental results and detailed performance evaluations, our study achieves high accuracy rates, thus addressing the existing research gap and providing an effective approach for liver disease detection.

Author 1: Junjie Huang
Author 2: Caihong Li
Author 3: Fengjun Yan
Author 4: Yuanchun Guo

Keywords: Liver disease detection; deep learning; digital pathology; YOLOv8; accuracy enhancement

PDF

Paper 115: Reliability in Cloud Computing Applications with Chaotic Particle Swarm Optimization Algorithm

Abstract: In recent years, IT managers of large enterprises and stakeholders have turned to cloud computing due to the benefits of reduced maintenance costs and security concerns, as well as access to high-performance hardware and software resources. Two main challenges need to be considered: ensuring that everyone has access to services, and finding efficient allocation options. First, especially with software services, it is very difficult to predict every service that may be needed. The second challenge is to select the best independent service among different providers with features related to application reliability. This paper presents a framework that uses the particle swarm optimization technique to optimize reliability parameters in distributed systems applications. The proposed strategy seeks a configuration with the best service and a high degree of competence. Although this method does not guarantee an exact solution, the particle swarm optimization algorithm reaches a result close to the best solution and reduces the time required to adjust the parameters of distributed systems applications. The results have been compared with a genetic algorithm, showing that the chaotic PSO algorithm has a shorter response time than both the genetic algorithm and standard PSO. The chaotic PSO algorithm also shows strong stability and ensures that the solution obtained from the proposed approach is close to the optimal solution.
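
A minimal chaotic-PSO sketch: a logistic map drives the inertia weight so the swarm keeps exploring and avoids premature convergence. The objective and coefficients are illustrative, not the paper's reliability model:

```python
import numpy as np

def sphere(x):                          # stand-in objective to minimise
    return float(np.sum(x ** 2))

rng = np.random.default_rng(7)
n, dim = 20, 5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
z = 0.7                                 # logistic-map state

for _ in range(200):
    z = 4.0 * z * (1.0 - z)             # chaotic sequence in (0, 1)
    w = 0.4 + 0.5 * z                   # chaotic inertia weight
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = np.clip(w * vel + 2.0 * r1 * (pbest - pos)
                  + 2.0 * r2 * (gbest - pos), -2.0, 2.0)
    pos = pos + vel
    vals = np.array([sphere(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best value found:", pbest_val.min())
```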

Author 1: Wenli WANG
Author 2: Yanlin BAI

Keywords: Reliability; cloud computing; chaotic particle swarm optimization algorithm; distributed systems

PDF

Paper 116: Advanced Active Player Tracking System in Handball Videos Using Multi-Deep Sort Algorithm with GAN Approach

Abstract: Active player tracking in sports analytics is crucial for understanding team dynamics, player performance, and game strategies. This paper introduces an innovative approach to tracking active players in handball videos using a fusion of the Multi-Deep SORT algorithm and a Generative Adversarial Network (GAN) model. The novel integration aims to enhance player appearance for robust and precise tracking in dynamic gameplay. The system starts with a GAN model trained on annotated handball video data, generating synthetic frames to improve the visual quality and realism of player appearances, thereby refining the input data for tracking. The Multi-Deep SORT algorithm, enhanced with GAN-generated features, improves object association and continuous player tracking. This framework addresses key challenges in active player tracking, handling occlusions, variations in player appearances, and complex interactions. Additionally, GAN-based enhancements improve accuracy in distinguishing active from inactive players, facilitating precise localization and recognition. Performance evaluation demonstrates the system's efficacy in achieving high tracking accuracy, robustness, and differentiation between player activity levels. Metrics such as Average Precision (AP), Average Recall (AR), accuracy, and F1-score affirm the system's advancement in active player tracking. This pioneering fusion of Multi-Deep SORT with GAN-based player appearance enhancement sets a new standard for precise, robust, and context-aware active player tracking in handball videos. It offers comprehensive insights for coaches, analysts, and players to optimize team strategies and performance. This paper highlights the novel integration's advancements and benefits in the domain of sports analytics. Notably, the proposed method achieved enhanced efficiency with an average precision of 94.99%, recall of 93.67%, accuracy of 93.89%, and F-score of 94.33%.

Author 1: Poovaraghan R J
Author 2: Prabhavathy P

Keywords: Handball recognition; multi-deep SORT; GAN; deep learning; computer vision

PDF

Paper 117: An Improved Genetic Algorithm and its Application in Routing Optimization

Abstract: Traditional routing algorithms cannot adapt to complex and changeable network environments, and the basic genetic algorithm cannot be applied directly to routing optimization problems because it lacks a suitable coding method. An improved genetic algorithm was proposed to find optimal or near-optimal routes. The network model and mathematical expression of the routing optimization problem were defined, and the routing problem was transformed into a problem of finding the optimal solution. To meet the specific needs of network routing optimization, several key improvements to the GA were made, including the design of the coding scheme, the generation of the initial population, the construction of the fitness function, and the improvement of the crossover and mutation operators. Simulation results in two typical network environments show that the improved GA performs excellently in routing optimization. Compared with the Dijkstra and Floyd algorithms, the improved GA not only has excellent robustness and adaptability in solving routing optimization problems, but can also effectively cope with dynamic changes in the network environment, providing an efficient and reliable routing solution for dynamic networks.
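
The path-encoding ideas can be illustrated with a toy version (not the paper's exact operators): chromosomes are node paths, fitness is the inverse of path cost, and crossover splices two parents at a randomly chosen common interior node, repairing any loops introduced by the splice:

```python
import random

random.seed(3)
EDGES = {("s", "a"): 2, ("s", "b"): 5, ("a", "b"): 1,
         ("a", "t"): 6, ("b", "t"): 2}
COST = {**EDGES, **{(v, u): w for (u, v), w in EDGES.items()}}  # undirected

def fitness(path):
    """Inverse path cost: shorter routes score higher."""
    return 1.0 / sum(COST[e] for e in zip(path, path[1:]))

def crossover(p1, p2):
    """Splice tails at a random common interior node, then repair loops."""
    common = [n for n in p1[1:-1] if n in p2[1:-1]]
    if not common:
        return list(p1)                # no shared interior node: keep parent
    n = random.choice(common)
    child = p1[:p1.index(n)] + p2[p2.index(n):]
    clean = []
    for node in child:                 # cut back to the first occurrence
        if node in clean:
            clean = clean[:clean.index(node) + 1]
        else:
            clean.append(node)
    return clean

p1 = ["s", "a", "t"]                   # cost 8
p2 = ["s", "a", "b", "t"]              # cost 5
child = crossover(p1, p2)
print(child, "cost:", 1 / fitness(child))
```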

Author 1: Jianwei Wang
Author 2: Wenjuan Sun

Keywords: Improvement of genetic algorithm; routing optimization; shortest path; crossover operator; mutation operator

PDF

Paper 118: An Analysis of the Effect of Using Online Loans on User Data Privacy

Abstract: Some online loan providers deliberately leak user data. The emergence of the digital ecosystem at the beginning of the 21st century initiated major changes in the way society controls, communicates, and expresses information. Industry in Indonesia is growing very rapidly, especially alongside the progress of the digital economy. Changes in the digital economy have changed the way we access economic services, and digitalization has changed the way we work and the way society collaborates with other parties. This digitalization cannot be separated from the role of financial technology, including online loans. Digitalization can simplify the lending and borrowing process and increase accessibility so that it can be done efficiently. However, we must also be aware of the risks of online loans, especially in terms of user privacy and data security. According to the Financial Services Authority (OJK), incidents of data privacy violations and online loan data leaks reached 1,200 cases in 2022. In one type of case, a loan provider accesses a user's personal data to intimidate and threaten them, even escalating to visiting the user's location with hired thugs to provoke physical confrontation and make unreasonable demands such as increasing the loan interest. In some cases, fintechs deliberately sell or trade some of their users' personal data for their own profit. This research aims to provide education about the importance of clear regulations from the central government regarding the peer-to-peer lending industry. The research uses a systematic literature review method so that the analysis is structured and objective.

Author 1: Indrajani Sutedja
Author 2: Muhammad Firdaus Adam
Author 3: Fauzan Hafizh
Author 4: Muhammad Farrel Wahyudi

Keywords: Online loans; data privacy; peer-to-peer lending; OJK; fintechs

PDF

Paper 119: Forecast for Container Retention in IoT Serverless Applications on OpenWhisk

Abstract: This research tackles resource management in OpenWhisk-based serverless applications for the Internet of Things (IoT) by introducing a novel approach to container retention optimization. We leverage the capabilities of AWS Forecast, specifically its DeepAR+ and Prophet algorithms, to dynamically forecast workload patterns. This real-time forecast empowers us to make adaptive adjustments to container retention durations. By optimizing retention times, we can effectively mitigate cold start latency, the primary reason behind sluggish response times in IoT serverless environments. Our approach outperforms conventional preloading and chaining techniques by significantly increasing resource utilization efficiency. Since OpenWhisk is an open-source platform, our methodology was able to achieve a cost reduction. By integrating it with Amazon Forecast's built-in algorithms, we surpassed traditional cache cold start strategies. These findings strongly support the viability of dynamic container retention optimization for IoT serverless deployments. Evaluations conducted on the OpenWhisk platform demonstrate substantial benefits. We observed a remarkable 67% reduction in cold start latency, translating to expedited response times and a demonstrably enhanced end-user application experience. These findings convincingly validate the efficacy of AWS Forecast in optimizing container retention for IoT serverless deployments by capitalizing on its deep learning (DeepAR+) and interpretable forecasting (Prophet) abilities. This research lays a solid foundation for future studies on optimizing container management across various DevOps practices and container orchestration platforms, contributing to the advancement of efficient and responsive serverless architectures.

Author 1: Ganeshan Mahalingam
Author 2: Rajesh Appusamy

Keywords: Serverless IoT; AWS Forecast Deep AR+; Prophet; AWS EKS; docker and containers; cold start; OpenWhisk

PDF

Paper 120: Original Strategy for Verbatim Collecting Knowledge from Mostly-Illiterate and Secretive Experts: West Africa Traditional Medicine’s Case

Abstract: In the least developed countries, 80% of the population relies on traditional medicine (TM), and West Africa is no exception. Multilingualism is very pronounced there. Additionally, TM practitioners (TMP) commonly wish to keep their knowledge secret, and illiteracy affects the vast majority of TMP in the region. Thus, exchanges between practitioners for knowledge and experience sharing are severely hindered by multilingualism, illiteracy, and secretiveness. This raises the question of the reliability and relevance of the data and knowledge gathered from these practitioners. Conventional data collection methods are not operational in this context. Hence, we designed an original data collection method, which we call back-and-forth, to overcome these difficulties. This method allows us to obtain stable, verbatim collections from the TMP. Both sequential and recursive, it was applied to data collection during visits to 110 practitioners in West Africa, with two to four visits per practitioner. 79 practitioners were ultimately included in the study project; the other 31 either did not adhere to the project or provided unstable knowledge. 13 diseases and 12 plants were collected, along with the "plant cures disease" relations between them, as expressed by these 79 practitioners. Our second objective was to extend the domain ontology of West African TM, namely ontoMEDTRAD, owing to three new concepts emerging from the above. In the face of climate change, which may lead to the extinction of some plants, and in order to update the contents of some older TM reference sources, it proved necessary to compare them with the opinions and knowledge collected from the TMP.

Author 1: Kouamé Appoh
Author 2: Lamy Jean-Baptiste
Author 3: Kroa Ehoulé

Keywords: Knowledge elicitation; collection data method; ontology; traditional medicine; West Africa; ontoMEDTRAD

PDF

Paper 121: Tunisian Lung Cancer Dataset: Collection, Annotation and Validation with Transfer Learning

Abstract: Globally, lung cancer remains the leading cause of cancer-related deaths, with early detection significantly improving survival rates. Developing robust machine learning models for early detection necessitates access to high-quality, localized datasets. This project establishes the first lung cancer dataset in Tunisia, utilizing DICOM CT scans from 123 Tunisian patients. The dataset, annotated by experienced radiologists, includes diverse forms of lung cancer at various stages. Using transfer learning with pre-trained 3D ResNet models from Tencent’s MedicalNet, our tests showed the dataset outperformed previous models in specificity and sensitivity. This demonstrates its effectiveness in capturing the unique clinical characteristics of the Tunisian population and its potential to significantly enhance lung cancer diagnosis and detection.

Author 1: Omar Khouadja
Author 2: Mohamed Saber Naceur
Author 3: Samira Mhamedi
Author 4: Anis Baffoun

Keywords: Lung cancer; Tunisia; dataset; transfer learning; medical imaging; annotations

PDF

Paper 122: Ensemble Feature Selection for Student Performance and Activity-Based Behaviour Analysis

Abstract: Analyzing students' behaviour during online classes is vital for teachers to identify the strengths and weaknesses of online classes. This analysis, based on observing academic performance and student activity data, helps teachers understand teaching outcomes. Most Educational Data Mining (EDM) processes analyze either students' academic data or their behavioural data alone, in which case accurate prediction of student behaviour cannot be achieved. This study addresses these issues by considering both student activity and academic performance datasets to evaluate teaching and learner outcomes efficiently. A suitable method for handling high-dimensional data is needed when analyzing Educational Data (ED), because academic data grows daily and exponentially. This study uses two kinds of data for student behaviour analysis, and applies feature reduction and selection methods to extract only the important features in order to improve behaviour analysis performance. A hybrid ensemble method is utilized to obtain the features most relevant to predicting students' performance and activity levels; this helps reduce the complexity of the feature-learning model and improve the prediction performance of the classification model. The study uses Improved Principal Component Analysis (IPCA) to select the most relevant features, and the resultant features of the IPCA are given as input to an ensemble method to select the most relevant feature sets and improve prediction accuracy. Prediction is performed using Residual Network-50 (ResNet50) combined with a Support Vector Machine (SVM) to classify students' performance and activity during online classes. The proposed approach predicted student performance and activity with a maximum accuracy of 98.03% for online classes and 98.06% for exams.

Author 1: Varsha Ganesh
Author 2: S Umarani

Keywords: Behaviour analysis; deep learning; educational data mining; student performance prediction; students activity monitoring; machine learning

PDF

Paper 123: Diagnosing People at Risk of Heart Diseases Using the Arduino Platform Under the IoT Platform

Abstract: This work uses the Arduino platform under the Internet of Things (IoT) to diagnose individuals at risk of heart disease. In response to the increasing prevalence of life-threatening health conditions, an enormous focus has been placed on delivering high-quality healthcare. Several factors contribute to an individual's health condition, and certain diseases can be severe and even fatal. In both industrialised and developing nations, cardiovascular illnesses have surpassed all others as the leading causes of death over the last few decades. Significant decreases in mortality may be achieved by detecting cardiac problems early and keeping patients closely monitored by medical experts. Unfortunately, it is not currently possible to accurately detect heart disease in all cases or to provide round-the-clock consultation with medical experts, as this would require additional knowledge, time, and expertise. Aiming to identify possible heart illness using Deep Learning (DL) methods, this research proposes a concept for an IoT-based system that can foresee the occurrence of heart disease. The paper introduces a pre-processing technique, Transfer by Subspace Similarity (TBSS), aimed at enhancing the accuracy of electrocardiogram (ECG) signal classification. The proposed IoT implementation includes using the Arduino IoT operating system to store and evaluate data gathered by the pulse sensor. The raw data collected include interference that decreases classification precision, so the novel pre-processing technique is used to remove distorted ECG signals. To evaluate classifier performance, this study used a hybrid CNN-LSTM classifier, which detects normal and abnormal heartbeat rates based on temporal and spatial features. A Deep Learning (DL) model that uses Talos for hyper-parameter optimisation is recommended; this approach dramatically improves the accuracy of heart disease predictions. The experimental findings clearly show that Machine Learning (ML) classification methods perform much better after pre-processing. Using the widely recognised MIT-BIH-AR database, we assess the proposed framework in comparison to MCH ResNet. The CNN-LSTM model, optimized using hyper-parameter tuning with Talos, achieved outstanding metrics: an accuracy of 99.1%, a precision of 98.8%, a recall of 99.5%, an F1-score of 99.1%, and an AUC-ROC of 0.99.
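
In the spirit of the hybrid classifier described, a minimal CNN-LSTM for ECG beat windows might look as follows in Keras (layer sizes, window length, and class count are assumptions, not the paper's tuned configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    # One ~1 s ECG window sampled at 360 Hz, single channel.
    layers.Conv1D(32, 5, activation="relu", input_shape=(360, 1)),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu"),   # morphological (spatial) features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                           # temporal dependencies
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),     # normal vs. abnormal beat
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```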

Author 1: Xiaoxi Fan
Author 2: Qiaoxia Wang
Author 3: Yao Sun

Keywords: Arduino platform; internet of things; heart disease diagnosis; high-quality healthcare; cardiovascular diseases; deep learning

PDF

Paper 124: Hybrid CNN: An Empirical Analysis of Machine Learning Models for Predicting Legal Judgments

Abstract: Artificial Intelligence with NLP has revolutionized the legal industry, which was previously under-digitized and is now eager to adopt digital technologies for increased efficiency. Case backlog issues, exacerbated by population growth, can be alleviated by AI's potential for decision prediction for laypeople, litigants, and adjudicators. Legal judgment prediction (LJP) is viewed as a text classification and prediction problem, with encoding models crucial for accurate textual representation and downstream tasks. These models capture syntax, semantics, and context, varying in performance based on the task and dataset. Selecting the right model, whether traditional ML or DL, using different evaluation metrics is complex. This paper addresses this research gap by reviewing 12 cutting-edge ML models and 10 DL models with two embedding methods on real-time Madras High Court criminal cases from Manupatra. The comprehensive comparison of classifier models on real-time case documents provides insights for researchers to innovate despite challenges and limitations. Evaluation metrics like accuracy, F1 score, precision, and recall show that Support Vector Machines (SVM), Logistic Regression, and SGD with Doc2Vec (D2V) encoding and shallow neural networks perform well. Although Transformers process longer input sequences with parallel word analysis and self-attention layers, they show weaknesses on real-time datasets. This article proposes a novel hybrid CNN with a transformer model to predict binary judgments, outperforming traditional ML and DL models in precision, recall, and accuracy. Finally, we summarise the most important ramifications, potential research avenues, and difficulties facing the legal research field.

Author 1: G. Sukanya
Author 2: J. Priyadarshini

Keywords: Legal judgment prediction; encoding; SVM; SGD; Doc2vec; CNN; transformers

PDF

Paper 125: Exploring Google Play Store Apps Using Predictive User Context: A Comprehensive Analysis

Abstract: Google Play Store is a digital platform for mobile applications, where users can download and install apps for their android devices. It is a great source of data for mining and analyzing app performance and user behavior. The increasing volume of mobile applications poses a challenge for users in finding apps that align with their preferences. This work aims to utilize predictive user context to analyze user behavior, thereby enhancing user experience and app development. The work focuses on identifying trends in the app market to recommend suitable applications for users. Play Store app analysis involves gathering data, performing comprehensive evaluations, and making informed decisions to improve app performance and user engagement. By applying Naïve Bayes, Random Forest, and Logistic Regression algorithms, this work evaluates the relationship between application attributes such as categories and the number of downloads, determining the most effective profiling algorithm for app performance evaluation. This analysis is crucial for recognizing user engagement trends, discovering new opportunities, and optimizing existing applications.

Author 1: Anandh A
Author 2: Ramya R
Author 3: Vakaimalar E
Author 4: Santhipriya B

Keywords: Naïve Bayes; random forest; logistic regression; mining; Google play store; android; mobile application

PDF

Paper 126: Unleashing the Power of Open-Source Transformers in Medical Imaging: Insights from a Brain

Abstract: This research investigates the application of open-source transformers, specifically the ConvNeXt V2 and Segformer models, for brain tumor classification and segmentation in medical imaging. The ConvNeXt V2 model is adapted for classification tasks, while the Segformer model is tailored for segmentation tasks, both undergoing a fine-tuning process involving model initialization, label encoding, hyperparameter adjustment, and training. The ConvNeXt V2 model demonstrates exceptional performance in accurately classifying various types of brain tumors, achieving a remarkable accuracy of 99.60%. In comparison to other state-of-the-art models such as ConvNeXt V1, Swin, and ViT, ConvNeXt V2 consistently outperforms them, attaining superior accuracy rates across all metrics for each tumor type. Notably, it predicted the no-tumor class with 100% accuracy. Meanwhile, the Segformer model excelled in accurately segmenting brain tumors, achieving a Dice score of up to 90% and a Hausdorff distance of 0.87 mm. These results underscore the transformative potential of open-source transformers, exemplified by the ConvNeXt V2 and Segformer models, in revolutionizing medical imaging practices. This study paves the way for further exploration of transformer applications in medical imaging and optimization of these models for enhanced performance, heralding a promising future for advanced diagnostic tools.
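
The two segmentation metrics reported above can be computed on binary masks as in this small sketch (the toy masks stand in for real ground-truth and predicted segmentations):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, truth):
    """Dice score: 2 * |A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def hausdorff(pred, truth):
    """Symmetric Hausdorff distance between mask boundaries (in pixels)."""
    p, t = np.argwhere(pred), np.argwhere(truth)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

truth = np.zeros((64, 64), bool); truth[20:40, 20:40] = True
pred = np.zeros((64, 64), bool);  pred[22:42, 21:41] = True
print(f"Dice = {dice(pred, truth):.3f}, Hausdorff = {hausdorff(pred, truth):.2f} px")
```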

Author 1: M. A. Rahman
Author 2: A. Joy
Author 3: A. T. Abir
Author 4: T. Shimamura

Keywords: Open-source transformers; ConvNeXt V2; Segformer; brain tumor classification; medical image segmentation; diagnostic accuracy; neuro-oncology

PDF

Paper 127: A Multi-Criteria Decision-Making Approach for Equipment Evaluation Based on Cloud Model and VIKOR Method

Abstract: Equipment evaluation stands as a critical task in both equipment system development and military operation planning. This task is often recognized as a complex multi-criteria decision-making (MCDM) problem. Adding to the intricacy is the uncertain nature inherent in military operations, leading to the introduction of fuzziness and randomness into the equipment evaluation problem, rendering it unsuitable for precise information. This paper addresses the uncertainty associated with equipment evaluation by proposing a novel MCDM method that combines the cloud model and the VIKOR method. To address the multifaceted nature of the equipment evaluation problem, a two-level hierarchical evaluation framework is constructed, which comprehensively considers both the capabilities and characteristics of the equipment system during the evaluation process. The cloud model is then employed to represent the uncertain evaluations provided by experts, and a similarity-based expert weight calculation approach is introduced for calculating expert weights, thereby determining the relative importance of different experts. Subsequently, the VIKOR method is extended by incorporating the cloud model to evaluate and rank various equipment systems, where the criteria weights for this evaluation are established using the analytic hierarchy process (AHP). To demonstrate the efficacy of the proposed method, a practical case study involving the evaluation of unmanned combat aerial vehicles is presented. The results obtained are validated through sensitivity analysis and comparative analysis, affirming the reliability and reasonability of the proposed method in providing equipment evaluation results. In summary, the proposed method offers a novel and effective approach for addressing equipment evaluation challenges under uncertainty.
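
For readers unfamiliar with VIKOR's mechanics, here is a compact crisp sketch on a toy decision matrix (the paper's version operates on cloud-model evaluations rather than crisp scores, with AHP-derived weights):

```python
import numpy as np

X = np.array([[7.0, 8.0, 6.0],     # alternative A
              [9.0, 6.5, 7.0],     # alternative B
              [8.0, 7.0, 9.0]])    # alternative C  (3 benefit criteria)
w = np.array([0.5, 0.3, 0.2])      # criteria weights (e.g. from AHP)

f_best, f_worst = X.max(axis=0), X.min(axis=0)
norm = w * (f_best - X) / (f_best - f_worst)
S = norm.sum(axis=1)               # group utility
R = norm.max(axis=1)               # individual regret
v = 0.5                            # compromise coefficient
Q = v * (S - S.min()) / (S.max() - S.min()) + \
    (1 - v) * (R - R.min()) / (R.max() - R.min())
for name, q in sorted(zip("ABC", Q), key=lambda p: p[1]):
    print(f"{name}: Q = {q:.3f}")  # lower Q ranks higher
```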

Author 1: Jincheng Guan
Author 2: Jiachen Liu
Author 3: Hao Chen
Author 4: Wenhao Bi

Keywords: Multi-criteria decision-making; equipment evaluation; cloud model; VIKOR

PDF

Paper 128: Graph Convolutional Network for Occupational Disease Prediction with Multiple Dimensional Data

Abstract: Occupational diseases present a significant global challenge, affecting a vast number of workers. Accurate prediction of occupational disease incidence is crucial for effective prevention and control measures. Although deep learning methods have recently emerged as promising tools for disease forecasting, existing research often focuses solely on patient body parameters and disease symptoms, potentially overlooking vital diagnostic information. Addressing this gap, our study introduces a Deep Graph Convolutional Neural Network (DGCNN) designed to detect occupational diseases by utilizing demographic information, work environment data, and the intricate relationships between these data points. Experimental results demonstrate that our DGCNN method surpasses other state-of-the-art methods, achieving high performance with an Area Under the Curve (AUC) of 96.2%, an accuracy of 98.7%, and an F1-score of 75.2% on the testing set. This study not only highlights the effectiveness of DGCNNs in occupational disease prediction but also underscores the value of integrating diverse data types for comprehensive disease diagnosis.

Author 1: Khanh Nguyen-Trong
Author 2: Tuan Vu-Van
Author 3: Phuong Luong Thi Bich

Keywords: Occupational disease diagnostics; heterogeneous data; imbalanced data; Graph Convolutional Network (GCN); deep graph convolutional neural network

PDF

Paper 129: Construction Cost Estimation in Data-Poor Areas Using Grasshopper Optimization Algorithm-Guided Multi-Layer Perceptron and Transfer Learning

Abstract: Accurate construction cost estimation is crucial for completing projects within the planned timeframe and budget. Using machine learning methods to predict construction costs has become a new trend. However, machine learning methods typically require a large amount of data for model training, which makes cost estimation particularly challenging in data-poor areas. This paper proposes a novel method, the Grasshopper Optimization Algorithm-Guided Multi-Layer Perceptron with Transfer Learning (GOA-MLP-TL), specifically designed for construction cost estimation in data-poor areas. GOA-MLP-TL utilizes the global search capability of the GOA to optimize the parameters of the MLP network. Additionally, an adaptation layer is added to the MLP network, using the Maximum Mean Discrepancy (MMD) measure as a regularization term to bridge the gap between the source and target domains. GOA-MLP-TL can effectively leverage a model trained on a data-rich area and transfer that knowledge to adapt the model to data-poor areas. The proposed approach is verified on two datasets from different areas, and the experimental results show that, compared to the traditional machine learning method MLP and to GOA-MLP without transfer learning, the coefficient of determination (R²) of the proposed GOA-MLP-TL is improved by 12.05% and 6.90%, respectively. This demonstrates the effectiveness of GOA-MLP-TL for construction cost estimation in data-poor areas.
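
The MMD measure mentioned above can be sketched with an RBF kernel as follows (a standalone illustration on synthetic features; in a transfer-learning setup it would be computed on the adaptation layer's activations during training):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF kernel matrix between two sample sets."""
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def mmd2(xs, xt, gamma=1.0):
    """Squared MMD: small values mean well-aligned distributions."""
    return (rbf(xs, xs, gamma).mean()
            + rbf(xt, xt, gamma).mean()
            - 2.0 * rbf(xs, xt, gamma).mean())

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, (100, 4))   # "data-rich area" features
target = rng.normal(0.5, 1.0, (100, 4))   # shifted "data-poor area" features
same = rng.normal(0.0, 1.0, (100, 4))     # another draw from the source law
print("MMD^2 shifted:", round(mmd2(source, target), 4))
print("MMD^2 same   :", round(mmd2(source, same), 4))   # near zero
```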

Author 1: Xuan Sha
Author 2: Guoqing Dong
Author 3: Xiaolei Li
Author 4: Juan Sheng

Keywords: Construction cost estimation; multi-layer perceptron; grasshopper optimization algorithm; transfer learning; machine learning

PDF

Paper 130: Exploring Abstractive Text Summarization: Methods, Dataset, Evaluation, and Emerging Challenges

Abstract: The latest advanced models for abstractive summarization, which utilize encoder-decoder frameworks, produce exactly one summary for each source text. This systematic literature review (SLR) comprehensively examines the recent advancements in abstractive text summarization (ATS), a pivotal area in natural language processing (NLP) that aims to generate concise and coherent summaries from extensive text sources. We delve into the evolution of ATS, focusing on key aspects such as encoder-decoder architectures, innovative mechanisms like attention and pointer-generator models, training and optimization methods, datasets, and evaluation metrics. Our review analyzes a wide range of studies, highlighting the transition from traditional sequence-to-sequence models to more advanced approaches like Transformer-based architectures. We explore the integration of mechanisms such as attention, which enhances model interpretability and effectiveness, and pointer-generator networks, which adeptly balance between copying and generating text. The review also addresses the challenges in training these models, including issues related to dataset quality and diversity, particularly in low-resource languages. A critical analysis of evaluation metrics reveals a heavy reliance on ROUGE scores, prompting a discussion on the need for more nuanced evaluation methods that align closely with human judgment. Additionally, we identify and discuss emerging research gaps, such as the need for effective summary length control and the handling of model hallucination, which are crucial for the practical application of ATS. This SLR not only synthesizes current research trends and methodologies in ATS, but also provides insights into future directions, underscoring the importance of continuous innovation in model development, dataset enhancement, and evaluation strategies. Our findings aim to guide researchers and practitioners in navigating the evolving landscape of abstractive text summarization and in identifying areas ripe for future exploration and development.

Author 1: Yusuf Sunusi
Author 2: Nazlia Omar
Author 3: Lailatul Qadri Zakaria

Keywords: Abstractive text summarization; systematic literature review; natural language processing; evaluation metrics; dataset; computational linguistics

PDF

Paper 131: Identification of Agile Requirements Change Management Success Factors in Global Software Development Based on the Best-Worst Method

Abstract: To create products that are both cost-effective and high quality, a majority of software development companies follow the principles of global software development (GSD). One of the most significant and challenging stages of the agile software development process is requirements change management (RCM); however, the execution of agile software development activities is hindered by the geographical distance between GSD teams, especially when it comes to agile requirements change management (ARCM). The literature suggests that, in a particular context, ARCM can profit from applying Multi-Criteria Decision-Making (MCDM) techniques. Within the area of ARCM, a suitable framework can provide an effective decision-making process that encourages higher customer satisfaction with the software projects developed this way. A methodology for applying an MCDM method in the ARCM context is presented in this paper. In particular, we propose a model for investigating the prioritization of ARCM success factors in the GSD context based on a decision-making method, namely the Best-Worst Method (BWM). The BWM’s ability to solve intricate decision-making problems with multiple criteria and alternatives is demonstrated by the proposed model’s findings.
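
For readers unfamiliar with BWM, the following is a minimal sketch of its standard linear formulation: given the decision-maker's best-to-others and others-to-worst comparison vectors, the criterion weights minimize the maximum deviation ξ. The four-factor comparison values are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(a_best, a_worst, best_idx, worst_idx):
    """Linear BWM: minimize xi subject to |w_best - a_best[j]*w_j| <= xi,
    |w_j - a_worst[j]*w_worst| <= xi, sum(w) = 1, w >= 0."""
    n = len(a_best)
    c = np.zeros(n + 1)
    c[-1] = 1.0                                  # objective: minimize xi
    A_ub, b_ub = [], []

    def add_abs(expr):                           # encode |expr . w| <= xi
        for sign in (1.0, -1.0):
            row = np.zeros(n + 1)
            row[:n], row[-1] = sign * expr, -1.0
            A_ub.append(row)
            b_ub.append(0.0)

    for j in range(n):
        e = np.zeros(n); e[best_idx] += 1.0; e[j] -= a_best[j]
        add_abs(e)                               # |w_B - a_Bj * w_j| <= xi
        e = np.zeros(n); e[j] += 1.0; e[worst_idx] -= a_worst[j]
        add_abs(e)                               # |w_j - a_jW * w_W| <= xi
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=[np.append(np.ones(n), 0.0)], b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[-1]                  # weights, consistency index xi

# Hypothetical example: four ARCM success factors, factor 0 best, factor 3 worst
w, xi = bwm_weights(np.array([1, 2, 4, 8]), np.array([8, 4, 2, 1]), 0, 3)
print(np.round(w, 3), round(xi, 4))              # ~[0.533 0.267 0.133 0.067], 0.0
```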

Author 1: Abdulmajeed Aljuhani

Keywords: Best-Worst Method (BWM); Agile Requirements Change Management (ARCM); success factors; Global Software Development (GSD)

PDF

Paper 132: Degree Based Search: A Novel Graph Traversal Algorithm Using Degree Based Priority Queues

Abstract: This paper introduces a novel graph traversal algorithm, Degree Based Search, which leverages degree-based ordering and priority queues to efficiently identify shortest paths in complex graph structures. Our method prioritizes nodes based on their degrees, enhancing the exploration of related components and offering flexibility in diverse scenarios. Comparative analysis demonstrates the superior performance of Degree Based Search in accelerating path discovery compared to traditional methods such as Breadth First Search and Depth First Search. Using a priority queue ensures optimal node selection: the method iteratively chooses nodes with the highest or lowest degree. Based on this concept, we classify our approach into two distinct algorithms: the Ascendant Node First Search, which prioritizes nodes with the highest degree, and the Descent Node First Search, which prioritizes nodes with the lowest degree. This methodology offers diversity and flexibility in graph exploration, accommodating various scenarios and maximizing efficiency in navigating complex graph structures. Experimental validation illustrates the algorithm’s proficiency in solving intricate tasks such as detecting communities in Facebook networks. Moreover, its versatility extends across diverse domains, from autonomous driving to warehouse robotics and biological systems. The algorithm emerges as a potent tool for graph analysis, efficiently traversing graphs and significantly enhancing performance. Its wide applicability unlocks novel possibilities in various scenarios, advancing graph-related applications.
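
A minimal sketch of the degree-priority idea as described, assuming an adjacency-list graph: a heap keyed by node degree replaces the FIFO queue of Breadth First Search, and a sign flip switches between the Ascendant and Descent variants. This is an interpretation of the abstract's description, not the authors' reference implementation, and the returned path is simply whatever the degree ordering reaches first.

```python
import heapq

def degree_based_search(adj, start, goal, ascendant=True):
    """Degree Based Search sketch: explore nodes in order of degree using a
    priority queue.  ascendant=True pops highest-degree nodes first (Ascendant
    Node First Search); False pops lowest-degree first (Descent Node First
    Search).  adj: dict mapping node -> list of neighbors."""
    sign = -1 if ascendant else 1                # heapq is a min-heap
    heap = [(sign * len(adj[start]), start, [start])]
    visited = {start}
    while heap:
        _, node, path = heapq.heappop(heap)
        if node == goal:
            return path
        for nbr in adj[node]:
            if nbr not in visited:
                visited.add(nbr)
                heapq.heappush(heap, (sign * len(adj[nbr]), nbr, path + [nbr]))
    return None                                  # goal unreachable

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
         "D": ["B", "C", "E"], "E": ["C", "D"]}
print(degree_based_search(graph, "A", "E", ascendant=True))  # ['A', 'C', 'E']
```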

Author 1: Shyma P V
Author 2: Sanil Shanker K P

Keywords: Graph traversal; degree based search algorithm; ascendant node; ascendant node first searching algorithm; descent node; descent node first searching algorithm

PDF

Paper 133: IPD-Net: Detecting AI-Generated Images via Inter-Patch Dependencies

Abstract: With the rapid development of generative models, the fidelity of AI-generated images has almost reached a level at which humans find it difficult to distinguish real images from fake ones. The rapid progress of this technology may lead to the widespread dissemination of fake content, so developing effective AI-generated image detectors has become very important. However, current detectors are still limited in their ability to generalize across different generative models. In this paper, we propose an efficient and simple neural network framework based on inter-patch dependencies, called IPD-Net, for detecting AI-generated images produced by various generative models. Previous research has shown that AI-generated images exhibit inconsistencies in the inter-pixel relations between rich-texture and poor-texture regions. Based on this principle, IPD-Net uses a self-attention mechanism to model the dependencies between all patches within an image, enabling it to learn how to extract appropriate inter-patch dependencies and classify them, further improving detection efficiency. We perform experimental evaluations on the CNNSpot-DS and GenImage datasets. Experimental results show that IPD-Net outperforms several state-of-the-art baseline models on multiple metrics and has good generalization ability.
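
As a toy illustration of modelling inter-patch dependencies, here is a minimal sketch of scaled dot-product self-attention over flattened image patches; the random projection matrices stand in for learned weights, and the patching scheme is an assumption rather than IPD-Net's exact design.

```python
import numpy as np

def patch_self_attention(patches, d_k=16, seed=0):
    """patches: (num_patches, patch_dim) array of flattened image patches.
    Returns the (num_patches, num_patches) attention matrix, where each row
    gives one patch's dependencies on all patches."""
    rng = np.random.default_rng(seed)
    dim = patches.shape[1]
    W_q, W_k = rng.normal(size=(dim, d_k)), rng.normal(size=(dim, d_k))
    Q, K = patches @ W_q, patches @ W_k
    scores = Q @ K.T / np.sqrt(d_k)              # pairwise patch affinities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    return attn / attn.sum(axis=1, keepdims=True)

# 8x8 image cut into four 4x4 patches (each flattened to a 16-dim vector)
img = np.random.default_rng(1).random((8, 8))
patches = np.array([img[r:r+4, c:c+4].ravel()
                    for r in (0, 4) for c in (0, 4)])
print(patch_self_attention(patches).round(3))
```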

Author 1: Jiahan Chen
Author 2: Mengtin Lo
Author 3: Hailiang Liao
Author 4: Tianlin Huang

Keywords: AI-generated image detection; image forensics; self-attention mechanism

PDF

Paper 134: An FPA-Optimized XGBoost Stacking for Multi-Class Imbalanced Network Attack Detection

Abstract: Network anomaly detection systems face challenges with imbalanced datasets, particularly in classifying underrepresented attack types. This study proposes a novel framework for improving F1-scores in multi-class imbalanced network attack detection using the UNSW-NB15 dataset, without resorting to resampling techniques. Our approach integrates Flower Pollination Algorithm-based hyperparameter tuning with an ensemble of XGBoost classifiers in a stacking configuration. Experimental results show that our FPA-XGBoost-Stacking model significantly outperforms individual XGBoost classifiers and existing ensemble models. The model achieved a higher overall weighted F1-score compared to the individual XGBoost classifier and to Thockchom et al.’s heterogeneous stacking ensemble. Our approach demonstrated remarkable effectiveness across various levels of class imbalance, for example on Analysis and Backdoor, which are highly underrepresented classes, and on DoS, which is a moderately underrepresented class. This research contributes to more effective network security systems by offering a solution for imbalanced classification without the drawbacks of resampling techniques. It demonstrates that homogeneous stacking with XGBoost can outperform heterogeneous approaches for skewed class distributions. Future work will extend this approach to other cybersecurity datasets and explore its applicability in real-time network environments.
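
For orientation, below is a minimal, generic sketch of the Flower Pollination Algorithm loop as it is commonly formulated; the switch probability, population size, and stand-in objective are illustrative, and a real run would minimize the cross-validated error of the XGBoost ensemble members rather than this toy function.

```python
import numpy as np
from math import gamma, sin, pi

def levy(size, beta=1.5, rng=None):
    """Mantegna's algorithm for Levy-flight step sizes."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def fpa_minimize(f, low, high, n_flowers=20, iters=200, p_switch=0.8, seed=0):
    """With probability p_switch a flower takes a Levy-flight step toward the
    best solution (global pollination); otherwise it mixes with two random
    flowers (local pollination)."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    pop = rng.uniform(low, high, size=(n_flowers, low.size))
    fit = np.array([f(x) for x in pop])
    best_i = int(fit.argmin())
    best, best_fit = pop[best_i].copy(), fit[best_i]
    for _ in range(iters):
        for i in range(n_flowers):
            if rng.random() < p_switch:          # global pollination
                cand = pop[i] + levy(low.size, rng=rng) * (best - pop[i])
            else:                                # local pollination
                j, k = rng.choice(n_flowers, size=2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            cand = np.clip(cand, low, high)
            fc = f(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
                if fc < best_fit:
                    best, best_fit = cand.copy(), fc
    return best, best_fit

# Stand-in objective over two hyperparameters (e.g. learning rate, max depth)
obj = lambda x: (x[0] - 0.1) ** 2 + (x[1] - 6.0) ** 2
best, val = fpa_minimize(obj, low=[0.01, 2], high=[0.5, 12])
print(best.round(3), round(val, 6))
```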

Author 1: Hui Fern Soon
Author 2: Amiza Amir
Author 3: Hiromitsu Nishizaki
Author 4: Nik Adilah Hanin Zahri
Author 5: Latifah Munirah Kamarudin

Keywords: Intrusion detection; multi-class imbalanced classification; ensemble learning approaches

PDF

Paper 135: Deep Learning and Web Applications Vulnerabilities Detection: An Approach Based on Large Language Models

Abstract: Web applications are part of the daily life of Internet users, who find services in all sectors of activity. Web applications have consequently become the target of malicious users, who exploit web application vulnerabilities to gain access to unauthorized resources and sensitive data, with consequences for users and businesses alike. The growing complexity of web technologies makes traditional web vulnerability detection methods less effective. These methods tend to generate false positives, and their implementation requires cybersecurity expertise. Machine learning and deep learning-based web vulnerability detection techniques, for their part, require large datasets for model training; unfortunately, the scarcity and obsolescence of such data make these models impractical. The emergence of large language models and their success in natural language processing offer new prospects for web vulnerability detection, since large language models can be fine-tuned with little data to perform specific tasks. In this paper, we propose an approach based on large language models for web application vulnerability detection.
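
As a hedged sketch of what the inference side of such an approach might look like with the Hugging Face transformers API: the model name below is a placeholder for a classifier fine-tuned on labeled vulnerable/benign payloads, and the label convention is an assumption, not the paper's reported setup.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL = "my-org/webvuln-detector"                # hypothetical fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

samples = [
    "GET /search?q=<script>alert(1)</script> HTTP/1.1",  # reflected XSS attempt
    "SELECT * FROM users WHERE id = '1' OR '1'='1'",     # SQL injection pattern
]
inputs = tokenizer(samples, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).tolist())            # assumed: 0 = benign, 1 = vulnerable
```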

Author 1: Sidwendluian Romaric Nana
Author 2: Didier Bassole
Author 3: Desire Guel
Author 4: Oumarou Sie

Keywords: Deep learning; web application; vulnerability; detection; large language model

PDF

Paper 136: Exploring Effective Diagnostic and Therapeutic Strategies for Deep Vein Thrombosis in High-Risk Patients: A Study

Abstract: A blood clot formed in a blood vessel is termed a thrombus, and strategies for diagnosing a thrombus at an early stage play a pivotal role in patient outcomes. Most commonly, blood clots occur in the calf muscles of the lower extremities, which leads to Deep Vein Thrombosis (DVT). Vulnerable patients include those on prolonged bed rest after surgery and those already affected by stroke, acute ischemia, cerebral palsy, and similar conditions. According to a report by the World Health Organization (WHO), nearly 900,000 people are affected annually, with approximately 100,000 deaths each year. At present, blood clots can be identified using blood tests such as D-dimer tests and cardiac biomarkers, and imaging modalities such as Doppler ultrasound, venography, magnetic resonance imaging (MRI), and computed tomography (CT). We elaborately discuss the diagnostic yield and incidence of DVT, focusing on its risk factors and on the available diagnostic and therapeutic techniques. The research addresses DVT incidence, diagnostic strategies, and therapeutic interventions; the efficacy of VR rehabilitation and treatment modalities; challenges related to artificial intelligence (AI)-based treatments; and the potential benefits of different game types in DVT management. This study aims to bridge the gap between research and real-time application by providing a wide range of strategies that comprise both basic and state-of-the-art techniques. It is a vital source for researchers and experts, providing insights into the effective development of advanced medical devices. The study concludes with a summary of point-of-care diagnosis, rehabilitation therapy, and an exploration of various game types, providing future insights.

Author 1: Pavihaa Lakshmi B
Author 2: Vidhya S

Keywords: Diagnosis; DVT; game-based therapy; head-mounted display; rehabilitation therapy; virtual reality

PDF

Paper 137: Automated Detection of Offensive Images and Sarcastic Memes in Social Media Through NLP

Abstract: In this digital era, social media is one of the key platforms for collecting customer feedback and reflecting users’ views on various aspects, including products, services, brands, and events. However, there is a rise of sarcastic memes on social media, which often convey a meaning contrary to the implied sentiment and challenge traditional machine learning identification techniques. Memes, blending text and visuals, are difficult to interpret solely from their captions or images, as their humor often relies on subtle contextual cues requiring a nuanced understanding for accurate interpretation. Our study introduces an Offensive Images and Sarcastic Memes Detection model to address this problem. The model uses Optical Character Recognition (OCR) and bidirectional long short-term memory (Bi-LSTM) for sarcastic meme detection. For offensive image detection, the model employs an Autoencoder LSTM, deep learning models such as DenseNet and MobileNet, and computer vision techniques like a Feature Fusion Process (FFP) based on Transfer Learning (TL) with image augmentation. The study showcases the effectiveness of the proposed methods in achieving high accuracy in detecting offensive content across different modalities, such as text, memes, and images. Based on tests conducted on real-world datasets, our model demonstrated an accuracy of 92% on the Hateful Memes Challenge dataset. The proposed methodology also achieved a Testing Accuracy (TA) of 95.7% for DenseNet with transfer learning on the NPDI dataset and 95.12% on the Pornography dataset. Moreover, implementing Transfer Learning with the Feature Fusion Process (FFP) resulted in a TA of 99.45% for the NPDI dataset and 98.5% for the Pornography dataset.
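
To illustrate the text branch of such a pipeline, here is a minimal Keras sketch of a Bi-LSTM classifier over OCR-extracted meme text; the vocabulary size, layer widths, and binary head are illustrative assumptions, not the paper's reported configuration.

```python
import tensorflow as tf

VOCAB_SIZE, EMBED_DIM = 20000, 128               # illustrative sizes
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None,), dtype="int32"),  # token id sequences
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),       # P(sarcastic)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(token_ids, labels, ...)  # token_ids: integer ids of the OCR text
```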

Author 1: Tummala Purnima
Author 2: Ch Koteswara Rao

Keywords: Deep learning; natural language processing; offensive images; sarcastic memes; toxic content detection

PDF

Paper 138: Log-Driven Conformance Checking Approximation Method Based on Machine Learning Model

Abstract: Conformance checking techniques are usually used to determine to what degree a process model and a real execution trace correspond to each other. Most state-of-the-art techniques calculate an exact conformance value under the assumption that the reference model of the business system is known. However, in many real applications, the reference model is unknown or has changed for various reasons, so the initially known reference model is no longer usable, and only some historical event execution traces with their corresponding conformance values are retained. This paper proposes a log-driven conformance checking method that tackles two issues: first, it presents an approach that calculates an approximate conformance value much faster than existing methods by using machine learning; second, it presents an approach for conducting conformance checking in probabilistic settings. Both approaches assume that no reference model is known and that only historical event traces and their corresponding fitness values are available as training data. Specifically, for large event data, the computing time of the proposed methods is shorter than that of alignment-based methods; the baseline methods include k-nearest neighbors, random forest, quadratic discriminant analysis, linear discriminant analysis, gated recurrent units, and long short-term memory. Experimental results show that adding a machine learning classification vector to the training set as a preprocessing step yields a higher conformance checking value than training without the added classification vector. Simultaneously, when applied to processes with probabilities, the proposed log-driven conformance checking approach can detect more inconsistent behaviors. The proposed method provides a new way to improve the efficiency and accuracy of conformance checking. It enhances the management efficiency of business processes, potentially reducing costs and risks, and can be applied to conformance checking of complex processes in the future.
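
A toy sketch of the core log-driven idea: learn a regressor from historical (trace, fitness) pairs and predict the fitness of new traces without computing alignments. The count-vector encoding and toy data below are illustrative stand-ins, not the paper's exact features or datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

ACTIVITIES = ["a", "b", "c", "d"]

def encode(trace):
    """Encode an event trace as a vector of activity counts."""
    return [trace.count(act) for act in ACTIVITIES]

# Illustrative historical traces with their alignment-based fitness values
traces = [list("abcd"), list("abd"), list("aabcd"), list("dcba"), list("abcdd")]
fitness = [1.0, 0.8, 0.9, 0.4, 0.85]

X = np.array([encode(t) for t in traces])
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, fitness)
# Approximate fitness of a new trace, with no reference model or alignment
print(reg.predict([encode(list("abcc"))]))
```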

Author 1: Huan Fang
Author 2: Sichen Zhang
Author 3: Zhenhui Mei

Keywords: Conformance checking; fitness; log-driven; machine learning; deep learning; probabilities

PDF

Paper 139: Classification of Spatial Data Based on K-means and Voronoï Diagram

Abstract: This paper focuses on the time taken by different algorithms to search data in a large database. The execution time of these algorithms becomes high when searching non-redundant data distributed across different database sites, where the search consists of reading each site to find the data. The main purpose is to establish suitable models for representing data in order to facilitate data search. This paper describes a classification of spatial data using a combination of the k-means algorithm and the Voronoï diagram to determine clusters representing different groups of database sites. The classification benefits from the k-means algorithm, which determines the appropriate number and the centers of the required clusters, and from the Voronoï diagram, which delineates each cluster’s area with clear margins, yielding a model for organizing data. A composition of the k-means algorithm followed by the Voronoï diagram has been implemented on simulated data to obtain the clusters, over which future searches can be parallelized across clusters to improve execution time. Applied to e-health in GIS, a better distribution of medical centers and available services would contribute strongly to population well-being.
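
A short sketch of the described pipeline with scikit-learn and SciPy, on simulated site coordinates: k-means finds cluster centers for the database sites, and the Voronoi diagram of those centers delineates each cluster's region. The number of sites and clusters is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
sites = rng.uniform(0, 100, size=(200, 2))       # simulated database site positions

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(sites)
centers = kmeans.cluster_centers_

vor = Voronoi(centers)                           # region boundaries between clusters
print("centers:\n", centers.round(2))
print("ridges between adjacent regions:", len(vor.ridge_vertices))

# A query is routed to the cluster whose center's Voronoi cell contains it,
# which for Euclidean distance is simply the nearest center:
query = np.array([[50.0, 50.0]])
print("query routed to cluster", kmeans.predict(query)[0])
```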

Author 1: Moubaric KABORE
Author 2: Béné-wendé Odilon Isaïe ZOUNGRANA
Author 3: Abdoulaye SERE

Keywords: Classification; K-means; Voronoï diagram; GIS; big data; data search

PDF

Paper 140: Unexpected Trajectory Detection Based on the Geometrical Features of AIS-Generated Ship Tracks

Abstract: Due to the efficiency and reliability of delivering goods by ship, maritime transport has been the backbone of global trade. In normal circumstances, a ship’s voyage is expected to assure the safety of life at sea, efficient and safe navigation, and protection of the maritime environment. However, ships may demonstrate unexpected behavior in certain situations, such as machinery malfunction, unexpectedly bad weather, and other emergencies, as well as involvement in illicit activities. These situations pose threats to the safety and security of maritime transport, and the expansion of these threats makes manual surveillance, which involves extensive labor and is prone to oversight, inefficient. Thus, automated surveillance systems are required. This paper proposes a method to detect the unexpected behavior of ships based on Automatic Identification System (AIS) data. The method exploits the geometrical features of AIS-generated trajectories to identify unexpected trajectories, which may deviate from common routes, loiter, or both. It introduces novel formulas for calculating trajectory redundancy and curvature features. DBSCAN clustering is applied to these features to classify trajectories as expected or unexpected. Unlike existing methods, the proposed technique requires neither trajectory-to-image conversion nor training on labeled datasets. The technique was tested on real-world AIS data from the South China Sea and western Indonesian, Singaporean, and Malaysian waters between July 2021 and February 2022. The experimental results demonstrate the method’s feasibility in detecting deviating and loitering behaviors. Evaluation on a labeled dataset shows superior performance compared to existing loitering detection methods across multiple metrics, with 99% accuracy and 100% precision in identifying loitering trajectories. The proposed method aims to provide maritime authorities and fleet owners with an efficient tool for monitoring ship behaviors in real time with respect to safety, security, and economic concerns.
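
Below is a sketch of the detection pipeline with stand-in features, since the paper's own redundancy and curvature formulas are not reproduced here: a common redundancy proxy is path length over straight-line distance, and a curvature proxy is the mean absolute turning angle; DBSCAN then separates expected trajectories from outliers. The simulated tracks are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def trajectory_features(track):
    """track: (n, 2) array of AIS positions (e.g. projected coordinates)."""
    steps = np.diff(track, axis=0)
    seg_len = np.linalg.norm(steps, axis=1)
    # Redundancy proxy: travelled length vs. straight-line displacement
    redundancy = seg_len.sum() / (np.linalg.norm(track[-1] - track[0]) + 1e-9)
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    turns = np.angle(np.exp(1j * np.diff(headings)))  # wrapped turning angles
    curvature = np.abs(turns).mean()                  # curvature proxy
    return [redundancy, curvature]

rng = np.random.default_rng(0)
straight = [np.column_stack([np.linspace(0, 10, 50),
                             np.linspace(0, 10, 50) + rng.normal(0, .05, 50)])
            for _ in range(20)]                       # near-direct transits
t = np.linspace(0, 6 * np.pi, 50)
loiter = [np.column_stack([np.cos(t), np.sin(t)]) * (1 + rng.normal(0, .02, (50, 1)))
          for _ in range(3)]                          # circling (loitering) tracks

X = np.array([trajectory_features(tr) for tr in straight + loiter])
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print(labels)  # -1 marks outliers, i.e. candidate unexpected trajectories
```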

Author 1: Wayan Mahardhika Wijaya
Author 2: Yasuhiro Nakamura

Keywords: Automatic identification system; vessel trajectory classification; unexpected behavior detection; data mining; data-driven decision support

PDF

Paper 141: SecureTransfer: A Transfer Learning Based Poison Attack Detection in ML Systems

Abstract: Critical systems are increasingly being integrated with machine learning (ML) models, which exposes them to a range of adversarial attacks. The vulnerability of machine learning systems to hostile attacks has drawn a lot of attention in recent years. When harmful input is added to the training set, the result is a poison attack, which can seriously impair model performance and threaten system security. Poison attacks pose a serious risk because adversaries inject malicious data into the training set, influencing the model’s performance during inference. Identifying these poison attacks is necessary to preserve the reliability and security of machine learning systems. A novel method based on transfer learning is proposed to identify poisoning attacks in machine learning systems. The methodology first generates poison data and then implements detection using transfer learning techniques; the poisonous data is detected using the pre-trained VGG16 model. This method can also be used in distributed machine learning systems with data and computation scattered across several nodes. Benchmark datasets are used to evaluate this strategy and demonstrate the effectiveness of the proposed method. Real-time applications, advantages, limitations, and future work are also discussed.
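
A hedged Keras sketch of the transfer-learning detector described: a pre-trained VGG16 backbone (ImageNet weights, top removed) extracts features, and a small head classifies samples as clean or poisoned. The input size, frozen backbone, and binary head are assumptions, not the paper's exact configuration.

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                           # keep the transferred weights frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(sample is poisoned)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, poison_labels, ...)   # labels from the generated poison set
```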

Author 1: Archa A T
Author 2: K. Kartheeban

Keywords: Poison attacks; machine learning security; transfer learning; generative adversarial networks; convolutional neural networks; VGG16

PDF

Paper 142: Overview of the Complex Landscape and Future Directions of Ethics in Light of Emerging Technologies

Abstract: In today’s rapidly evolving technological landscape, the ethical dimensions of information technology (IT) have become increasingly prominent, influencing everything from algorithmic decision-making to data privacy and cybersecurity. This paper offers a thorough examination of the multifaceted ethical considerations inherent in IT, spanning domains such as artificial intelligence (AI), big data analytics, cybersecurity practices, quantum computing, human behavior, environmental impact, and more. Through an in-depth analysis of real-world cases and existing research literature, the paper explores the ethical dilemmas and challenges encountered by stakeholders across the IT ecosystem. Central to the discussion are themes of transparency, accountability, fairness, and privacy protection, which are crucial for fostering trust and ethical behavior in the design, deployment, and governance of IT systems. The paper underscores the importance of integrating ethical principles into technological innovation, emphasizing the need for proactive measures to mitigate biases, uphold individual rights, and promote equitable outcomes. It also explores the ethical implications of emerging technologies such as AI, quantum computing, and the Internet of Things (IoT), shedding light on the potential risks and benefits they entail. Furthermore, the paper outlines future directions and strategies for advancing ethical practices in IT, advocating for multidisciplinary collaboration, global regulatory frameworks, corporate social responsibility initiatives, and continuous ethical inquiry. By providing a comprehensive roadmap for navigating ethical considerations in IT, this paper aims to empower policymakers, industry professionals, researchers, and educators to make informed decisions and promote a more ethical and sustainable digital future.

Author 1: Marianne A. Azer
Author 2: Rasha Samir

Keywords: Artificial intelligence; cybersecurity; data privacy; digital ethics; ethical considerations; information security; machine learning; technology ethics; transparency

PDF
